How-to guides
Step-by-step guides that cover key tasks and operations in LangSmith.
Setup
See the following guides to set up your LangSmith account.
- Create an account and API key
- Set up an organization
- Set up a workspace
- Set up billing
- Update invoice email, tax ID, and business information
- Set up access control (enterprise only)
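The guides above cover account, workspace, and billing setup in the UI. Once you have an API key, a quick connectivity check with the Python SDK might look like the sketch below (environment-variable names follow the SDK's conventions; the key value is a placeholder):

```python
import os

from langsmith import Client

# Placeholder values; create a real API key in the LangSmith UI.
os.environ["LANGSMITH_API_KEY"] = "lsv2_..."
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"

client = Client()   # reads the key and endpoint from the environment
print(client.info)  # simple check that the key and endpoint are valid
```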
 
Tracing
Get started with LangSmith's tracing features to add observability to your LLM applications.
- Annotate code for tracing
- Toggle tracing on and off
- Log traces to specific project
- Set a sampling rate for traces
- Add metadata and tags to traces
- Implement distributed tracing
- Access the current span within a traced function
- Log multimodal traces
- Log retriever traces
- Log custom LLM traces
- Prevent logging of sensitive data in traces
- Export traces
- Share or unshare a trace publicly
- Compare traces
- Trace generator functions
- Trace with LangChain
  - Installation
  - Quick start
  - Trace selectively
  - Log to specific project
  - Add metadata and tags to traces
  - Customize run name
  - Access run (span) ID for LangChain invocations
  - Ensure all traces are submitted before exiting
  - Trace without setting environment variables
  - Distributed tracing with LangChain (Python)
  - Interoperability between LangChain (Python) and LangSmith SDK
  - Interoperability between LangChain.JS and LangSmith SDK
- Trace with LangGraph
- Trace with Instructor (Python only)
- Trace with the Vercel AI SDK (JS only)
- Trace without setting environment variables
- Trace using the LangSmith REST API
- Calculate token-based costs for traces
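As a quick orientation for the SDK-level guides above, here is a minimal, illustrative sketch of annotating Python code with the `@traceable` decorator, toggling tracing on, logging to a specific project, and attaching metadata and tags. The function, project, and tag names are made up, and it assumes `LANGSMITH_API_KEY` is already set:

```python
import os

# Toggle tracing on and pick a target project. Older SDK versions use
# LANGCHAIN_TRACING_V2 / LANGCHAIN_PROJECT instead.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "my-demo-project"

from langsmith import traceable


@traceable(name="format_prompt", tags=["demo"], metadata={"version": "v1"})
def format_prompt(question: str) -> str:
    # Everything inside a @traceable function is recorded as a run (span).
    return f"Answer concisely: {question}"


@traceable(run_type="retriever")
def retrieve_docs(query: str) -> list:
    # run_type="retriever" tells LangSmith to render the output as documents.
    return [{"page_content": "LangSmith traces LLM apps.", "type": "Document"}]


if __name__ == "__main__":
    retrieve_docs("What does LangSmith do?")
    format_prompt("What does LangSmith do?")
```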
 
Datasets
Manage datasets in LangSmith to evaluate and improve your LLM applications.
- Manage datasets in the application
- Manage datasets programmatically
- Version datasets
- Share or unshare a dataset publicly
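For the programmatic route, a small sketch of creating a dataset and adding examples with the Python client; the dataset name and example content are illustrative, and `LANGSMITH_API_KEY` is assumed to be set:

```python
from langsmith import Client

client = Client()

dataset = client.create_dataset(
    dataset_name="qa-demo-dataset",
    description="Toy question/answer pairs for evaluation.",
)

client.create_examples(
    inputs=[
        {"question": "What is LangSmith?"},
        {"question": "What is a trace?"},
    ],
    outputs=[
        {"answer": "A platform for tracing and evaluating LLM applications."},
        {"answer": "A tree of runs recorded for a single invocation."},
    ],
    dataset_id=dataset.id,
)
```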
 
Evaluation
Evaluate your LLM applications to measure their performance over time.
- Evaluate an LLM application
- Bind an evaluator to a dataset in the UI
- Run an evaluation from the prompt playground
- Evaluate on intermediate steps
- Use LangChain off-the-shelf evaluators (Python only)
- Compare experiment results
- Evaluate an existing experiment
- Unit test LLM applications (Python only)
- Run pairwise evaluations
- Audit evaluator scores
- Create few-shot evaluators
- Fetch performance metrics for an experiment
- Run evals using the API only
- Upload experiments run outside of LangSmith with the REST API
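To make the workflow concrete, here is a minimal sketch using `evaluate()` from the Python SDK against the toy dataset from the Datasets sketch above; the target function and custom evaluator are illustrative:

```python
from langsmith.evaluation import evaluate


def target(inputs: dict) -> dict:
    # Stand-in for your real application; receives one example's inputs.
    return {"answer": f"You asked: {inputs['question']}"}


def has_answer(run, example) -> dict:
    # Custom evaluator: score 1 if the target produced a non-empty answer.
    answer = (run.outputs or {}).get("answer", "")
    return {"key": "has_answer", "score": int(bool(answer))}


results = evaluate(
    target,
    data="qa-demo-dataset",       # dataset name (see the Datasets sketch)
    evaluators=[has_answer],
    experiment_prefix="qa-demo",
)
```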
 
Human feedback
Collect human feedback to improve your LLM applications.
- Capture user feedback from your application to traces
- Set up a new feedback criteria
- Annotate traces inline
- Use annotation queues
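Programmatically, user feedback is attached to a run ID with a single client call. A sketch, assuming tracing is enabled and using illustrative names for the app function and feedback key:

```python
from langsmith import Client, traceable
from langsmith.run_helpers import get_current_run_tree

client = Client()
captured: dict = {}


@traceable
def answer(question: str) -> str:
    # Record this trace's run ID so feedback can be attached to it later.
    captured["run_id"] = get_current_run_tree().id
    return f"You asked: {question}"


answer("Was this helpful?")

# Later, e.g. when the user clicks thumbs-up in your app:
client.create_feedback(
    run_id=captured["run_id"],
    key="user-score",            # feedback criterion name (illustrative)
    score=1.0,
    comment="Helpful answer",
)
```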
 
Monitoring and automations
Leverage LangSmith's monitoring and automation features to make sense of your production data.
- Filter traces in the application
  - Create a filter
  - Filter for intermediate runs (spans)
  - Advanced: filter for intermediate runs (spans) on properties of the root
  - Advanced: filter for runs (spans) whose child runs have some attribute
  - Filter based on inputs and outputs
  - Filter based on input / output key-value pairs
  - Copy the filter
  - Manually specify a raw query in LangSmith query language
  - Use an AI Query to auto-generate a query
- Use monitoring charts
- Set up automation rules
- Set up online evaluations
- Set up webhook notifications for rules
- Set up threads
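The filtering guides above focus on the UI, but the same trace query language is accepted programmatically by the client's `list_runs`; a rough sketch, with an illustrative project name and filter:

```python
from datetime import datetime, timedelta

from langsmith import Client

client = Client()

# Root runs named "format_prompt" from the last day, using the same query
# language that the UI filter bar produces.
runs = client.list_runs(
    project_name="my-demo-project",
    is_root=True,
    filter='eq(name, "format_prompt")',
    start_time=datetime.now() - timedelta(days=1),
)

for run in runs:
    print(run.name, run.start_time, run.error)
```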
 
Prompts
Organize and manage prompts in LangSmith to streamline your LLM development workflow.
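Prompts can also be managed from code. A sketch using the client's `push_prompt`/`pull_prompt` helpers, which are available in recent SDK versions and need `langchain-core` installed for the prompt object; the prompt name is illustrative:

```python
from langchain_core.prompts import ChatPromptTemplate
from langsmith import Client

client = Client()

# Push a prompt to your workspace under an illustrative name.
prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a concise assistant."), ("user", "{question}")]
)
client.push_prompt("concise-qa", object=prompt)

# Pull it back later (a specific commit can be pinned as "concise-qa:<commit>").
pulled = client.pull_prompt("concise-qa")
print(pulled.invoke({"question": "What is LangSmith?"}))
```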
Playground
Quickly iterate on prompts and models in the LangSmith Playground.