—1—
The Two Core Concepts of LangChain

—2—
LangGraph Architecture Design
LangGraph is a development framework built around LangChain for modeling workflows as graphs, including the cyclic graphs that ordinary linear chains cannot express.
Let’s look at the following requirement:
You want to build a retrieval-augmented generation (RAG) system on a knowledge base, with one twist: if the RAG retrieval output does not meet a specific quality threshold, the agent/chain should re-retrieve the data and rewrite the prompt, repeating this process until the retrieved data meets the threshold.
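Conceptually, the control flow contains a loop, which a plain linear chain cannot express:

retrieve -> check quality -> (pass) -> answer
                 |
              (fail) -> rewrite prompt -> retrieve again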
—3—
Improving RAG with LangGraph
3.1. Initialize the large model. Here we use the OpenAI API, but other large models can also be used.
from typing import Dict, TypedDict, Optional
from langgraph.graph import StateGraph, END
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.embeddings.openai import OpenAIEmbeddings

llm = OpenAI(openai_api_key='your-api-key')  # replace with your own OpenAI API key
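Any LangChain-compatible model can be substituted here. For example, a chat model could be dropped in instead (a minimal sketch, assuming the same legacy langchain version as the imports above):

from langchain.chat_models import ChatOpenAI
# Swap in a chat model; the rest of the example is unchanged.
llm = ChatOpenAI(openai_api_key='your-api-key', temperature=0)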
3.2. Define a StateGraph, which is the graph object of LangGraph.
class GraphState(TypedDict):
    # Note: TypedDict fields cannot take default values, so we only
    # annotate the keys; missing keys are read with state.get() later.
    question: Optional[str]
    classification: Optional[str]
    response: Optional[str]
    length: Optional[int]
    greeting: Optional[str]

workflow = StateGraph(GraphState)
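Each node we register below is just a callable that receives the current state and returns only the keys it wants to update; LangGraph merges that partial update into the shared state. A hypothetical node for illustration (my_node is not part of the example):

def my_node(state):
    # Read anything from the state, return only the keys to update;
    # all other keys keep their current values.
    return {"response": "..."}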
3.3. Initialize a RAG retrieval chain from the existing vector database.
def retriever_qa_creation():
    embeddings = OpenAIEmbeddings()
    db = Chroma(embedding_function=embeddings, persist_directory='/database', collection_name='details')
    qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=db.as_retriever())
    return qa

rag_chain = retriever_qa_creation()
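This assumes a Chroma collection named 'details' has already been persisted under /database. If you are starting from scratch, such a collection could be created beforehand along these lines (a sketch; the texts are illustrative placeholders):

texts = ["Mehul developed project A.", "Mehul developed project B."]  # placeholders
db = Chroma.from_texts(
    texts,
    embedding=OpenAIEmbeddings(),
    persist_directory='/database',
    collection_name='details',
)
db.persist()  # write the collection to disk so the chain above can reload it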
3.4. We will add nodes to this StateGraph.
def classify(question):
    # Ask the LLM to label the input as 'greeting' or 'not_greeting'.
    return llm("Classify the intent of the given input as greeting or not_greeting. Output just the class. Input: {}".format(question)).strip()

def classify_input_node(state):
    question = state.get('question', '').strip()
    classification = classify(question)
    return {"classification": classification}

def handle_greeting_node(state):
    return {"greeting": "Hello! How can I help you today?"}

def handle_RAG(state):
    question = state.get('question', '').strip()
    prompt = question
    if state.get("length") < 30:
        search_result = rag_chain.run(prompt)
    else:
        search_result = rag_chain.run(prompt + '. Return total count only.')
    return {"response": search_result, "length": len(search_result)}

def bye(state):
    return {"greeting": "The graph has finished"}

workflow.add_node("classify_input", classify_input_node)
workflow.add_node("handle_greeting", handle_greeting_node)
workflow.add_node("handle_RAG", handle_RAG)
workflow.add_node("bye", bye)
- Use state.get() to read any state variable.
- The handle_RAG node implements the cyclic custom logic we want: if the output length is < 30, use prompt A; otherwise, use prompt B. On the first pass (before the RAG node has executed), we pass length=0 so that the first prompt is used. Because each node is a plain function, it can also be sanity-checked in isolation, as shown below.
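For example, the classifier node can be called directly before the graph is wired up (the exact label returned depends on the LLM):

print(classify_input_node({'question': 'Hello there'}))
# e.g. {'classification': 'greeting'}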
3.5. We will add the entry point and edges.
workflow.set_entry_point("classify_input")
workflow.add_edge('handle_greeting', END)
workflow.add_edge('bye', END)
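In newer langgraph releases, the entry point can equivalently be declared as an edge from the START sentinel instead of set_entry_point (an assumption about the installed version; the behavior is the same):

# from langgraph.graph import START
# workflow.add_edge(START, "classify_input")  # alternative to set_entry_point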
3.6. We add conditional edges.
def decide_next_node(state):
    return "handle_greeting" if state.get('classification') == "greeting" else "handle_RAG"

def check_RAG_length(state):
    return "handle_RAG" if state.get("length") > 30 else "bye"

workflow.add_conditional_edges(
    "classify_input",
    decide_next_node,
    {
        "handle_greeting": "handle_greeting",
        "handle_RAG": "handle_RAG"
    }
)

workflow.add_conditional_edges(
    "handle_RAG",
    check_RAG_length,
    {
        "bye": "bye",
        "handle_RAG": "handle_RAG"
    }
)
3.7. Compile and invoke the prompt. Initially, keep the length variable set to 0.
app = workflow.compile()
app.invoke({'question': 'Mehul developed which projects?', 'length': 0})
# Output
{'question': 'Mehul developed which projects?',
'classification': 'not_greeting',
'response': ' 4',
'length': 2,
'greeting': 'The graph has finished'}
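To watch the node-by-node execution that produces this result, the compiled app can also be streamed; stream() yields one partial state update per executed node, keyed by node name:

for step in app.stream({'question': 'Mehul developed which projects?', 'length': 0}):
    print(step)  # e.g. {'classify_input': {'classification': 'not_greeting'}}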
For the above prompt, the LangGraph flow is as follows:
classify_input: The intent is classified as not_greeting.
Due to the first conditional edge, move to handle_RAG.
Since length=0, use the first prompt and retrieve the answer (total length will be greater than 30).
Due to the second conditional edge, move again to handle_RAG.
Since length > 30, use the second prompt.
Due to the second conditional edge, move to bye.
End.
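Because the graph contains a cycle, it is worth bounding how long a run may loop. LangGraph enforces a recursion limit, which can be set per invocation through the config argument (the default limit depends on the installed version):

app.invoke({'question': 'Mehul developed which projects?', 'length': 0},
           {"recursion_limit": 10})  # abort if more than 10 graph steps execute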
Reference: https://mp.weixin.qq.com/s/Qxnny8ZHA_yG_XLQJm_QKg
END