What is LangChain
Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you are able to combine them with other sources of computation or knowledge. This library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:
- Question Answering over Specific Documents
- Chatbots
- Agents
Use cases of LangChain
LangChain's modules can be used in a variety of ways, and LangChain provides guidance and assistance in doing so. Below are some of the common use cases LangChain supports.
- Agents: are systems that use a language model to interact with other tools. These can be used to do more grounded question/answering, interact with APIs, or even take actions.
- Chatbots: Language models excel at producing text, which makes them ideal for building chatbots.
- Data Augmented Generation: Data augmented generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.
- Question Answering: Answering questions over specific documents, only utilizing the information in those documents to construct an answer. A type of Data Augmented Generation.
- Summarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.
- Evaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
- Generate Similar Examples: Generating examples similar to a given input. This is a common need in many applications, and LangChain provides some prompts/chains for assisting in this.
- Comparing Models: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
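The fetch-then-generate pattern behind Data Augmented Generation (and the Question Answering and Summarization use cases built on it) can be sketched in plain Python. This is an illustrative toy only, assuming a naive keyword retriever and a stub LLM; none of these function names are LangChain APIs.

```python
# Toy sketch of "data augmented generation": fetch external data first,
# then feed it into the generation step. The retriever and the LLM here
# are hypothetical stand-ins, not real LangChain components.

def retrieve(query, documents):
    """Naive retriever: keep documents that share a word with the query."""
    words = set(query.lower().split())
    return [d for d in documents if words & set(d.lower().split())]

def stub_llm(prompt):
    """Stand-in for a real LLM call: just echoes the prompt it was given."""
    return "Answer based on: " + prompt

def data_augmented_generation(query, documents):
    # Step 1: interact with the external data source.
    context = " ".join(retrieve(query, documents))
    # Step 2: use the fetched data in the generation step.
    prompt = f"Context: {context}\nQuestion: {query}"
    return stub_llm(prompt)

docs = ["LangChain composes LLMs with tools.", "Bananas are yellow."]
print(data_augmented_generation("What does LangChain compose?", docs))
```

A real chain would swap the stub retriever for a vector store lookup and the stub LLM for an actual model call, but the two-step shape stays the same.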
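The model-graded evaluation idea (using a language model to judge another model's output) can also be sketched with a stub. The grading prompt and the stub grader below are hypothetical illustrations of the pattern, not LangChain's actual evaluation prompts or chains.

```python
# Toy sketch of model-graded evaluation: one model's answer is judged
# by a (stub) grading model. The grader here is a hypothetical stand-in
# for a real LLM judge.

GRADING_PROMPT = "Question: {q}\nStudent answer: {a}\nGrade CORRECT or INCORRECT:"

def stub_grader(prompt):
    """Stand-in for an LLM judge: looks for a known fact in the prompt."""
    return "CORRECT" if "Paris" in prompt else "INCORRECT"

def evaluate(question, answer):
    # Build the grading prompt, then ask the judge model for a verdict.
    prompt = GRADING_PROMPT.format(q=question, a=answer)
    return stub_grader(prompt)

print(evaluate("What is the capital of France?", "Paris"))   # CORRECT
print(evaluate("What is the capital of France?", "London"))  # INCORRECT
```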
One more thing
Creating flows with LangFlow is easy. Simply drag sidebar components onto the canvas and connect them together to create your pipeline. LangFlow provides a range of LangChain components to choose from, including LLMs, prompt serializers, agents, and chains. Explore by editing prompt parameters, linking chains and agents, and tracking an agent's thought process, then export your flow.
Once you’re done, you can export your flow as a JSON file to use with LangChain.
from langflow import load_flow_from_json
# Load the workflow
flow = load_flow_from_json("path/to/flow.json")
# Now you can use it like any chain
query = "Hey, have you heard of LangFlow?"
answer = flow(query)
Hope you enjoy your journey of prompting!