Easily Create Test Case Generation AI Agent with LangGraph

Why Do You Need This AI Assistant?

  • Complex Requirement Documents
    PRD documents can be dozens of pages long, mixing text and images, making it easy to overlook test points when extracting manually.
  • Time-Consuming Test Case Design
    It requires juggling methods such as equivalence class partitioning, boundary value analysis, and exception-flow design, which can be overwhelming.
  • Difficult Cross-System Collaboration
    The work requires reading documents, analyzing images, and calling testing knowledge bases simultaneously…

Try building an intelligent generation pipeline with LangGraph in 5 steps 👇

1. Overview of LangGraph’s Core Capabilities

[Figure: LangGraph Architecture Diagram]
  • Intelligent Routing
    Automatically assigns tasks like a traffic control center.
  • Tool Invocation
    Connects to enterprise knowledge bases, APIs, OCR systems, and more.
  • State Memory
    Checkpointing lets long-document processing resume from where it left off (see the sketch after this list).
  • Multi-Round Verification
    Automatically checks for missing test scenarios.
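To make the state-memory point concrete, here is a minimal sketch of a LangGraph StateGraph compiled with an in-memory checkpointer. The state fields, node body, and thread_id are illustrative placeholders, not the pipeline built later in this article.

from typing import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END

class PipelineState(TypedDict):
    prd_text: str
    test_cases: str

def generate_cases(state: PipelineState) -> dict:
    # Placeholder node: a real node would call the LLM here
    return {"test_cases": f"cases derived from: {state['prd_text'][:50]}"}

builder = StateGraph(PipelineState)
builder.add_node("generate_cases", generate_cases)
builder.add_edge(START, "generate_cases")
builder.add_edge("generate_cases", END)

# The checkpointer stores state per thread_id, so a long run can be resumed
graph = builder.compile(checkpointer=MemorySaver())
graph.invoke({"prd_text": "PRD content..."},
             config={"configurable": {"thread_id": "demo"}})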

2. Practical Demonstration: From PRD to Test Cases

1. System Flowchart

[Figure: Test Case Generation Flowchart]

2. Key Tool Configuration

import base64

import requests
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool

# Read enterprise knowledge base tool
@tool
def get_cf_data(url: str):
    """Automatically parses the company's internal document structure"""
    # Parse the document at `url` (e.g. fetch the page and convert it to markdown)
    data = {"title": "", "html": "", "markdown": ""}
    field_descriptions = {
        "title": "Article Title",
        "html": "HTML Format",
        "markdown": "Markdown Format"
    }
    return {"data": data, "field_descriptions": field_descriptions}

# Image parsing tool
@tool
def get_md_img_data(img_url: str):
    """Uses AI to analyze images referenced in markdown"""
    try:
        response = requests.get(img_url)
        response.raise_for_status()  # Check if the request was successful
        image_data = base64.b64encode(response.content).decode('utf-8')  # Encode image content to base64
        # Use AI to summarize the content of the image
        # (gpt4o_model is a multimodal chat model initialized elsewhere, e.g. ChatOpenAI(model="gpt-4o"))
        summary = gpt4o_model.invoke([HumanMessage(
            content=[
                {'type': 'text', 'text': "Summarize the content of the image"},
                {
                    'type': 'image_url',
                    'image_url': {'url': f'data:image/png;base64,{image_data}'},
                },
            ]
        )])
        return summary.content
    except Exception as e:
        return f"Error processing the image: {str(e)}"

3. Core Logic of the Agent

from langgraph.prebuilt import create_react_agent

tools = [get_cf_data, get_md_img_data]

agent = create_react_agent(
    model=gpt4o_model.bind_tools(tools),
    tools=tools,
    prompt="""
    Assume you are a senior software test engineer. Your task is to read the document provided in CF and write test cases for it.
    Test case generation requirements:
     a. The generated test scenario statements must be fluent and clear, and the semantics and content of different test scenarios must not be repeated.
     b. First, use various case design methods to generate high-coverage test cases without missing any requirement details, where caseDesc is the test function point, caseStep is the test operation steps, and expectResult is the expected result of the test. The format for writing cases is:
    {
        "Method": "【Equivalence Class Partitioning / Boundary Value Analysis / Error Guessing / Orthogonal Experiment / or other testing design methods】",
        "caseDesc": "",
        "caseStep": "",
        "expectResult": ""
    }
    c. Then, review the generated cases and use multiple testing classification methods to fill in any missing test scenarios. The format for writing cases is:
    {
        "Method": "【Functional Testing / Interface Testing / Permissions / Security / Performance / or other testing aspects】",
        "caseDesc": "",
        "caseStep": "",
        "expectResult": ""
    }
    d. Strictly follow the case writing format, and do not test unrelated functions.
    e. Your response should be JSON containing only the fields above.
    """
)
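
Once created, the agent is itself a runnable graph. A minimal invocation might look like the following, where the knowledge-base URL is a hypothetical placeholder.

# Ask the agent to generate cases for a PRD page (hypothetical URL)
result = agent.invoke({
    "messages": [("user", "Generate test cases for the PRD at https://cf.example.com/pages/12345")]
})
print(result["messages"][-1].content)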

3. Demonstration of Generated Results

Input Document: PRD containing 5 functional modules (including 3 flowcharts)

Intelligent Output:

| Testing Method | Testing Scenario | Expected Result |
| --- | --- | --- |
| Boundary Value Analysis | Input 0 participants | Prompt “At least one person must be selected” |
| Permission Testing | Ordinary employee accessing management functions | Displays “No permission” prompt |
| Exception Flow Testing | Submit while disconnected from the internet | Data automatically cached for recovery |

4. Advanced Developer Tips

  1. Quality Check Loop
    Add result verification nodes that automatically check test case completeness (a minimal loop sketch follows this list).
  2. Knowledge Base Enhancement
    Integrate historical case libraries for intelligent recommendations.
  3. Manual Review Mechanism
    Push key cases for approval via corporate WeChat.
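
The quality check loop from tip 1 can be expressed as a conditional edge that routes back into regeneration until the review passes or a retry budget runs out. The state fields, completeness check, and node bodies below are illustrative assumptions, not the article's exact implementation.

from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class ReviewState(TypedDict):
    cases: list
    review_passed: bool
    revisions: int

def verify_cases(state: ReviewState) -> dict:
    # Illustrative completeness check; in practice an LLM reviewer would look for missing scenarios
    passed = len(state["cases"]) >= 10
    return {"review_passed": passed, "revisions": state["revisions"] + 1}

def regenerate(state: ReviewState) -> dict:
    # Placeholder: re-invoke the generation agent with feedback about the gaps
    return {"cases": state["cases"] + ["additional case"]}

def route(state: ReviewState) -> str:
    # Stop when the review passes or after a bounded number of revisions
    return "done" if state["review_passed"] or state["revisions"] >= 3 else "retry"

builder = StateGraph(ReviewState)
builder.add_node("verify", verify_cases)
builder.add_node("regenerate", regenerate)
builder.add_edge(START, "verify")
builder.add_conditional_edges("verify", route, {"done": END, "retry": "regenerate"})
builder.add_edge("regenerate", "verify")
review_graph = builder.compile()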

5. Why Choose LangGraph?

  • 3 Minutes
    Completes what is traditionally a full day of manual work.
  • Accuracy
    Improves by 40% (compared to manual writing).
  • Flexible Expansion
    Supports integration with Jira/Zentao and other testing management systems.

When you run the agent with debugging enabled, you can inspect how the tools were invoked.

[Figure: Tool Invocation]
