Quickly Build an Agent with Llama-Index

Meow! In the previous article, we used Tongyi Qianwen to create an intelligent customer-service agent with four major functions via four system-level prompts. This article builds an upgraded agent by calling Tongyi Qianwen through Llama-Index. First, let’s implement the simplest example, using ReActAgent and a FunctionTool to create a … Read more

Running Phi-4 Model with Ollama and Python Calls

Install Ollama: select the installation method appropriate to your operating system; taking Linux (CentOS) as an example, install with `curl -fsSL https://ollama.com/install.sh | sh` (requires sudo privileges). After installation, you can verify it succeeded by running `ollama --version`. Start Ollama: after a successful installation, start the Ollama service … Read more
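The setup steps this excerpt describes can be sketched as shell commands; the model tag `phi4` is assumed to be the Ollama registry name for Phi-4, and the excerpt does not show the service-start or pull steps, so those are a plausible reconstruction rather than the article's exact commands:

```shell
# Install Ollama on Linux (requires sudo)
curl -fsSL https://ollama.com/install.sh | sh

# Verify the installation succeeded
ollama --version

# Start the Ollama service in the background
# (on systemd distributions: sudo systemctl start ollama)
ollama serve &

# Pull the Phi-4 model, then run an interactive prompt against it
ollama pull phi4
ollama run phi4 "Hello"
```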

Building Local Network Search Agents with Phidata and Ollama

Background: Attempting to build search Agents based on a local Agent framework. Reference Website: https://docs.phidata.com/tools/website Basic Environment: Command line tools (Linux/Mac), python3 (set up an independent conda environment). Basic LLM: Download and install from the Ollama official website (if you have a ChatGPT membership, you can also use ChatGPT). AI Agent Framework: This time we … Read more

Automating IT Interviews with Ollama and Python Audio Features

Are you still troubled by the uneven quality and poor performance of domestic AI tools? Then take a look at Dev Cat AI (3in1)! This integrated AI assistant combines GPT-4, Claude 3, and Gemini, covering all models of these three AI tools, including GPT-4o and Gemini Flash. Now you can own them … Read more

Evaluate Stock Technical Indicators Using Ollama

This article has several interesting points: 1. Visualization using Streamlit. 2. Calculating rolling averages and momentum indicators to understand market trends. 3. Using Llama 3 to interpret the data. First, install and import the following packages: `import yfinance as yf`, `import pandas as pd`, `import schedule`, `import time`, `import ollama`, `from datetime import datetime, timedelta` … Read more
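A minimal, self-contained sketch of the rolling-average and momentum calculations mentioned above. Synthetic prices stand in for a live `yfinance` download, and the window sizes (5, 10) are illustrative assumptions, not necessarily the article's choices:

```python
import numpy as np
import pandas as pd

# Synthetic closing prices standing in for a yfinance download
close = pd.Series(np.linspace(100, 120, 30), name="Close")

# Rolling (moving) averages over a short and a long window
sma_5 = close.rolling(window=5).mean()
sma_10 = close.rolling(window=10).mean()

# Momentum: difference between today's close and the close n periods ago
momentum = close.diff(periods=10)

summary = pd.DataFrame(
    {"Close": close, "SMA5": sma_5, "SMA10": sma_10, "Momentum": momentum}
)
print(summary.tail(3).round(2))
```

The resulting DataFrame is what you would then serialize (e.g. `summary.tail().to_string()`) and hand to Llama 3 via `ollama` for interpretation.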

How to Deploy Private Free Large Models Locally with Ollama

Ollama is an open-source framework designed for the convenient deployment and operation of large language models (LLMs) on local machines. Its core feature is to simplify usage and provide an efficient technical architecture, allowing developers to easily access and use powerful AI language models. Ollama supports local … Read more

Windsurf: A Powerful Tool for API Automation Testing

I tried using Windsurf to write code for API automation testing and experienced its convenience and efficiency. Windsurf does not require high coding skills from users, while the accuracy of the generated code is relatively high. Moreover, it excels in generating test case scenario coverage. Once the code is completed, Windsurf can also automatically generate … Read more

Local Invocation of Llama3 Large Model Development

1. Test using the trained weights: from transformers import AutoModelForCausalLM, AutoTokenizer, TextGenerationPipeline import torch tokenizer = AutoTokenizer.from_pretrained(r"E:\大模型AI开发\AI大模型\projects\gpt2\model\models--uer--gpt2-chinese-cluecorpussmall\snapshots\c2c0249d8a2731f269414cc3b22dff021f8e07a3") model = AutoModelForCausalLM.from_pretrained(r"E:\大模型AI开发\AI大模型\projects\gpt2\model\models--uer--gpt2-chinese-cluecorpussmall\snapshots\c2c0249d8a2731f269414cc3b22dff021f8e07a3") # Load our own trained weights (Chinese poetry) model.load_state_dict(torch.load("net.pt")) # Use the built-in pipeline tool to generate content pipeline = TextGenerationPipeline(model, tokenizer, device=0) print(pipeline("天高", max_length=24)) The output quality is actually not good. 2. Post-process the AI-generated results # Customized … Read more
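The excerpt cuts off before the customized post-processing code, so, as an illustration only, here is one common clean-up step for Chinese GPT-2 pipeline output: removing the spaces the tokenizer inserts between characters and truncating at the last complete punctuation mark. The function name and truncation rules are hypothetical, not the article's actual code:

```python
def postprocess(generated: str) -> str:
    """Tidy raw pipeline output: strip inter-token spaces and
    cut the text at the last sentence-ending punctuation mark."""
    # TextGenerationPipeline for Chinese GPT-2 emits space-separated tokens
    text = generated.replace(" ", "")
    # Keep everything up to (and including) the last punctuation mark,
    # dropping any trailing incomplete fragment
    enders = "。，！？,."
    last = max((text.rfind(ch) for ch in enders), default=-1)
    return text[: last + 1] if last != -1 else text

print(postprocess("天 高 云 淡 ， 望 断 南 飞 雁 。 不 到"))
# → 天高云淡，望断南飞雁。
```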