Commit 347d82c by frkhan · Parent(s): 4b5a0cc

Enhance project structure and configuration for LangChain agent evaluation

- Updated README.md to include project features, installation, and usage instructions.
- Refactored app.py to support model selection and question filtering.
- Introduced configuration files for model and MCP server settings.
- Added Dockerfile and docker-compose.yml for containerized deployment.
- Improved requirements.txt for clarity and consistency.
- Created .dockerignore to exclude unnecessary files from Docker context.

.dockerignore ADDED
@@ -0,0 +1,14 @@
+.git
+.gitignore
+.venv
+__pycache__
+*.pyc
+*.pyo
+*.pyd
+.idea
+.vscode
+.dockerignore
+Dockerfile
+Dockerfile.dev
+docker-compose.yml
+README.md
Dockerfile ADDED
@@ -0,0 +1,16 @@
+FROM python:3.11-slim
+
+WORKDIR /app
+
+RUN apt-get update && apt-get install -y nodejs npm && rm -rf /var/lib/apt/lists/*
+
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt
+
+RUN npx playwright install chrome
+
+COPY . .
+
+EXPOSE 7860
+
+CMD ["python", "app.py"]
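The image installs Node.js alongside Python because the Playwright MCP server is launched through `npx` (see `configurations/mcp-server-config.json` below). A minimal sketch of a startup check for those external commands; the helper name is hypothetical and not part of this commit:

```python
import shutil
import sys

# Commands the MCP server config launches as subprocesses.
REQUIRED_COMMANDS = ["npx", "python"]

def check_runtime_deps() -> None:
    """Exit early if an MCP launcher command is missing from PATH."""
    missing = [cmd for cmd in REQUIRED_COMMANDS if shutil.which(cmd) is None]
    if missing:
        sys.exit(f"Missing required commands: {', '.join(missing)}")

if __name__ == "__main__":
    check_runtime_deps()
    print("All MCP runtime dependencies found.")
```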
README.md CHANGED
@@ -3,13 +3,101 @@ title: Template Final Assignment
 emoji: 🕵🏻‍♂️
 colorFrom: indigo
 colorTo: indigo
-sdk: gradio
-sdk_version: 5.25.2
-app_file: app.py
+# sdk: gradio
+# sdk_version: 5.25.2
+# app_file: app.py
+sdk: docker
+app_port: 7860
 pinned: false
 hf_oauth: true
 # optional, default duration is 8 hours/480 minutes. Max duration is 30 days/43200 minutes.
 hf_oauth_expiration_minutes: 480
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+
+# Agent Course Final Assignment by Hugging Face
+
+This project contains a Gradio application for evaluating a LangChain agent based on the GAIA (General AI Assistant) benchmark. The agent is designed to answer questions using a variety of tools, and its performance is scored by an external API.
+
+## Features
+
+- **Gradio Interface**: An easy-to-use web interface for running the evaluation and viewing the results.
+- **LangChain Agent**: A sophisticated agent built with LangChain, capable of using tools to answer questions.
+- **Multi-Tool Integration**: The agent can interact with multiple tools, such as a browser (via Playwright) and a YouTube transcript fetcher.
+- **Docker Support**: The entire application can be built and run using Docker and Docker Compose, ensuring a consistent environment.
+- **Observability**: Integrated with Langfuse for tracing and monitoring the agent's behavior.
+
+## Installation
+
+1. **Clone the repository:**
+
+   ```bash
+   git clone https://huggingface.co/spaces/hf-agent-course/final-assignment-template
+   cd final-assignment-template
+   ```
+
+2. **Create a virtual environment and install dependencies:**
+
+   ```bash
+   python -m venv .venv
+   source .venv/bin/activate  # On Windows, use `.venv\Scripts\activate`
+   pip install -r requirements.txt
+   ```
+
+3. **Install Playwright browsers:**
+
+   ```bash
+   npx playwright install
+   ```
+
+4. **Set up environment variables:**
+
+   Create a `.env` file in the root of the project and add the following variables:
+
+   ```
+   HF_TOKEN=<your-hugging-face-token>
+   GOOGLE_API_KEY=<your-google-api-key>
+   LANGFUSE_PUBLIC_KEY=<your-langfuse-public-key>
+   LANGFUSE_SECRET_KEY=<your-langfuse-secret-key>
+   ```
+
+## Usage
+
+To run the Gradio application locally, use the following command:
+
+```bash
+python app.py
+```
+
+This will start a local web server, and you can access the application in your browser at `http://127.0.0.1:7860`.
+
+## Docker
+
+This project includes `Dockerfile` and `docker-compose.yml` for running the application in a containerized environment.
+
+### Build and Run with Docker Compose
+
+To build and run the application using Docker Compose, use the following command:
+
+```bash
+docker-compose up --build
+```
+
+This will build the Docker image and start the application. You can access the Gradio interface at `http://localhost:7860`.
+
+### Development Environment
+
+A `Dockerfile.dev` is also provided for development purposes. To build and run the development environment, use the following command:
+
+```bash
+docker-compose -f docker-compose.yml -f docker-compose.dev.yml up --build
+```
+
+This will mount the local code into the container, allowing for live reloading of changes.
+
+## Contributing
+
+Contributions are welcome! Please feel free to submit a pull request or open an issue.
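Since a missing key only surfaces once the agent first calls a provider, a fail-fast check after `load_dotenv()` can catch an incomplete `.env` at startup. A sketch assuming the four variables listed in the README; the helper is illustrative and not part of `app.py`:

```python
import os
import sys

from dotenv import load_dotenv

# Variable names taken from the README's .env instructions.
REQUIRED_VARS = ["HF_TOKEN", "GOOGLE_API_KEY", "LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY"]

def check_env() -> None:
    """Exit with a clear message if any required variable is unset."""
    load_dotenv()
    missing = [v for v in REQUIRED_VARS if not os.getenv(v)]
    if missing:
        sys.exit(f"Missing environment variables: {', '.join(missing)}")

if __name__ == "__main__":
    check_env()
    print("Environment looks complete.")
```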
app.py CHANGED
@@ -7,6 +7,7 @@ import pandas as pd
 from langfuse import observe
 from langchain_agent import LangChainAgent
 from dotenv import load_dotenv
+import json
 
 load_dotenv()
 
@@ -14,8 +15,16 @@ load_dotenv()
 # --- Constants ---
 DEFAULT_API_URL = "https://agents-course-unit4-scoring.hf.space"
 
+# --- Model Definitions ---
+def load_model_config():
+    with open('configurations/app-config.json', 'r') as f:
+        config = json.load(f)
+    return config.get('model_config', {})
+
+AVAILABLE_MODELS = load_model_config()
+
 @observe()
-async def run_and_submit_all( profile: gr.OAuthProfile | None):
+async def run_and_submit_all(model_provider: str, model_name: str, selected_questions: list, profile: gr.OAuthProfile | None):
     """
     Fetches all questions, runs the BasicAgent on them, submits all answers,
     and displays the results.
@@ -42,7 +51,7 @@ async def run_and_submit_all( profile: gr.OAuthProfile | None):
         print(f"Error instantiating agent: {e}")
         return f"Error initializing agent: {e}", None
     # In the case of an app running as a hugging Face space, this link points toward your codebase ( usefull for others so please keep it public)
-    agent_code = f"https://huggingface.co/spaces/frkhan/hf-agent-course-final-assignment/tree/main"
+    agent_code = f"https://huggingface.co/spaces/{space_id}/tree/main"
     print(agent_code)
 
     # 2. Fetch Questions
@@ -66,11 +75,15 @@ async def run_and_submit_all( profile: gr.OAuthProfile | None):
         print(f"An unexpected error occurred fetching questions: {e}")
         return f"An unexpected error occurred fetching questions: {e}", None
 
+    # Filter questions
+    if "All" not in selected_questions:
+        questions_data = [q for q in questions_data if q['task_id'] in selected_questions]
+
     # 3. Run your Agent
     results_log = []
     answers_payload = []
     print(f"Running agent on {len(questions_data)} questions...")
-    for item in questions_data[:]:
+    for item in questions_data:
         task_id = item.get("task_id")
         question_text = item.get("question")
         file_name = item.get("file_name")
@@ -99,7 +112,7 @@ async def run_and_submit_all( profile: gr.OAuthProfile | None):
             print(f"Error reading file {file_name}: {e}")
 
         try:
-            submitted_answer = await agent(question_text)
+            submitted_answer = await agent(question_text, model_name, model_provider)
             answers_payload.append({"task_id": task_id, "submitted_answer": submitted_answer})
             results_log.append({"Task ID": task_id, "Question": question_text, "Submitted Answer": submitted_answer})
         except Exception as e:
@@ -161,6 +174,27 @@ async def run_and_submit_all( profile: gr.OAuthProfile | None):
     results_df = pd.DataFrame(results_log)
     return status_message, results_df
 
+def get_questions():
+    api_url = DEFAULT_API_URL
+    questions_url = f"{api_url}/questions"
+    try:
+        response = requests.get(questions_url, timeout=15)
+        response.raise_for_status()
+        questions_data = response.json()
+
+        formatted_questions = [("All", "All")]
+        for index, q in enumerate(questions_data):
+            task_id = q.get('task_id')
+            question_text = q.get('question', '')
+            if task_id is not None:
+                label = f"{index + 1} - {question_text[:20]}..."
+                print(f"Generated label for task_id {task_id}: {label}")  # Debug print
+                formatted_questions.append((label, task_id))
+
+        return formatted_questions
+    except Exception as e:
+        print(f"Error fetching questions for UI: {e}")
+        return [("All", "All")]
 
 # --- Build Gradio Interface using Blocks ---
 with gr.Blocks() as demo:
@@ -171,7 +205,9 @@ with gr.Blocks() as demo:
 
     1. Please clone this space, then modify the code to define your agent's logic, the tools, the necessary packages, etc ...
     2. Log in to your Hugging Face account using the button below. This uses your HF username for submission.
-    3. Click 'Run Evaluation & Submit All Answers' to fetch questions, run your agent, submit answers, and see the score.
+    3. Select the model provider and model to use.
+    4. Select the questions to run (or "All").
+    5. Click 'Run Evaluation & Submit All Answers' to fetch questions, run your agent, submit answers, and see the score.
 
     ---
     **Disclaimers:**
@@ -182,14 +218,28 @@ with gr.Blocks() as demo:
 
     gr.LoginButton()
 
-    run_button = gr.Button("Run Evaluation & Submit All Answers")
+    with gr.Row():
+        providers = list(AVAILABLE_MODELS.keys())
+        default_provider = providers[0] if providers else None
+        model_provider_dd = gr.Dropdown(label="Model Provider", choices=providers, value=default_provider)
+        model_name_dd = gr.Dropdown(label="Model Name", choices=AVAILABLE_MODELS.get(default_provider, []))
+
+    def update_models(provider):
+        models = AVAILABLE_MODELS.get(provider, [])
+        return gr.Dropdown(choices=models, value=models[0] if models else None)
+
+    model_provider_dd.change(fn=update_models, inputs=model_provider_dd, outputs=model_name_dd)
+
+    question_selection = gr.CheckboxGroup(label="Select Questions to Run", choices=get_questions(), value=["All"])
+
+    run_button = gr.Button("Run Evaluation & Submit Selected Answers")
 
     status_output = gr.Textbox(label="Run Status / Submission Result", lines=5, interactive=False)
-    # Removed max_rows=10 from DataFrame constructor
     results_table = gr.DataFrame(label="Questions and Agent Answers", wrap=True)
 
     run_button.click(
         fn=run_and_submit_all,
+        inputs=[model_provider_dd, model_name_dd, question_selection],
         outputs=[status_output, results_table]
     )
 
@@ -215,4 +265,4 @@ if __name__ == "__main__":
     print("-"*(60 + len(" App Starting ")) + "\n")
 
     print("Launching Gradio Interface for Basic Agent Evaluation...")
-    demo.launch(debug=True, share=False, server_name="0.0.0.0")
+    demo.launch(debug=False, share=False, server_name="0.0.0.0")
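Note that `profile: gr.OAuthProfile | None` does not appear in the `inputs=[...]` list passed to `run_button.click`: Gradio injects the OAuth profile automatically for parameters annotated with that type when the user is logged in via `gr.LoginButton`. A minimal standalone sketch of the same pattern, assuming a recent Gradio version:

```python
import gradio as gr

def greet(profile: gr.OAuthProfile | None) -> str:
    # Gradio fills this parameter from the login session; it is not
    # listed in inputs=[...] on the click handler.
    if profile is None:
        return "Please log in."
    return f"Logged in as {profile.username}"

with gr.Blocks() as demo:
    gr.LoginButton()
    out = gr.Textbox()
    btn = gr.Button("Who am I?")
    btn.click(fn=greet, inputs=None, outputs=out)
```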
configurations/app-config.json ADDED
@@ -0,0 +1,21 @@
+{
+  "model_config": {
+    "nvidia": [
+      "deepseek-ai/deepseek-v3.1",
+      "deepseek-ai/deepseek-v3.1-terminus",
+      "minimaxai/minimax-m2",
+      "mistralai/mistral-nemotron",
+      "qwen/qwen3-next-80b-a3b-instruct",
+      "qwen/qwen3-next-80b-a3b-thinking",
+      "moonshotai/kimi-k2-instruct-0905",
+      "nvidia/llama-3.3-nemotron-super-49b-v1.5"
+    ],
+    "google_genai": [
+      "gemini-2.0-flash",
+      "gemini-2.0-flash-lite",
+      "gemini-2.5-flash",
+      "gemini-2.5-flash-lite",
+      "gemini-2.5-pro"
+    ]
+  }
+}
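Adding a provider or model is a JSON-only change: `load_model_config()` in `app.py` returns this `model_config` mapping verbatim, and the UI dropdowns are populated from it. A small sketch that mirrors that loader and inspects the result; the `__main__` harness is illustrative:

```python
import json

def load_model_config(path: str = "configurations/app-config.json") -> dict[str, list[str]]:
    """Return the {provider: [model, ...]} mapping (mirrors app.py)."""
    with open(path, "r") as f:
        return json.load(f).get("model_config", {})

if __name__ == "__main__":
    for provider, names in load_model_config().items():
        print(f"{provider}: {len(names)} models")
```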
configurations/mcp-server-config.json ADDED
@@ -0,0 +1,19 @@
+{
+  "playwright_mcp": {
+    "transport": "stdio",
+    "command": "npx",
+    "args": [
+      "@playwright/mcp@latest",
+      "--headless",
+      "--isolated",
+      "--no-sandbox"
+    ]
+  },
+  "youtube_transcript_mcp": {
+    "transport": "stdio",
+    "command": "python",
+    "args": [
+      "mcp-servers/youtube-transcript.py"
+    ]
+  }
+}
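This file as a whole is the server mapping that `MultiServerMCPClient` expects, so the servers can be exercised outside the agent. A sketch of a standalone tool listing, assuming the `langchain_mcp_adapters.client` import path from the pinned `langchain-mcp-adapters` package:

```python
import asyncio
import json

from langchain_mcp_adapters.client import MultiServerMCPClient

async def list_mcp_tools() -> None:
    # Same loading pattern as langchain_agent.py: the whole JSON file
    # is passed to MultiServerMCPClient as the server mapping.
    with open("configurations/mcp-server-config.json", "r") as f:
        mcp_config = json.load(f)
    client = MultiServerMCPClient(mcp_config)
    tools = await client.get_tools()
    print([tool.name for tool in tools])

if __name__ == "__main__":
    asyncio.run(list_mcp_tools())
```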
docker-compose.yml ADDED
@@ -0,0 +1,42 @@
+services:
+  hf-final-assignment-prod:
+    build:
+      context: .
+      dockerfile: Dockerfile
+    ports:
+      - "12500:7860"
+    environment:
+      - NVIDIA_API_KEY=${NVIDIA_API_KEY} # Load this key from .env or manually add the secret
+      - GOOGLE_API_KEY=${GOOGLE_API_KEY} # Load this key from .env or manually add the secret
+      - LANGFUSE_PUBLIC_KEY=${LANGFUSE_PUBLIC_KEY} # Load this key from .env or manually add the secret
+      - LANGFUSE_SECRET_KEY=${LANGFUSE_SECRET_KEY} # Load this key from .env or manually add the secret
+      - LANGFUSE_HOST=${LANGFUSE_HOST} # Load this key from .env or manually add the secret
+      - HF_TOKEN=${HF_TOKEN} # Load this key from .env or manually add the secret
+    # volumes:
+    #   - .:/app
+    restart: unless-stopped
+    networks:
+      - app-network
+
+  hf-final-assignment-dev:
+    build:
+      context: .
+      dockerfile: Dockerfile
+    ports:
+      - "12501:7860"
+    environment:
+      - NVIDIA_API_KEY=${NVIDIA_API_KEY} # Load this key from .env or manually add the secret
+      - GOOGLE_API_KEY=${GOOGLE_API_KEY} # Load this key from .env or manually add the secret
+      - LANGFUSE_PUBLIC_KEY=${LANGFUSE_PUBLIC_KEY} # Load this key from .env or manually add the secret
+      - LANGFUSE_SECRET_KEY=${LANGFUSE_SECRET_KEY} # Load this key from .env or manually add the secret
+      - LANGFUSE_HOST=${LANGFUSE_HOST} # Load this key from .env or manually add the secret
+      - HF_TOKEN=${HF_TOKEN} # Load this key from .env or manually add the secret
+    restart: unless-stopped
+    volumes:
+      - .:/app
+    networks:
+      - app-network
+
+networks:
+  app-network:
+    driver: bridge
langchain_agent.py CHANGED
@@ -8,6 +8,7 @@ from langchain_openai import ChatOpenAI
 from dotenv import load_dotenv
 from langfuse import observe
 from langfuse.langchain import CallbackHandler
+import json
 
 
 
@@ -47,52 +48,32 @@ class LangChainAgent:
 
 
     @observe()
-    async def __call__(self, question: str) -> str:
+    async def __call__(self, question: str, model_name: str, model_provider: str) -> str:
         print(f"Agent received question (first 50 chars): {question[:50]}...")
-
-        client = MultiServerMCPClient({
-            "playwright_mcp":{
-                "transport": "stdio",
-                "command": "npx",
-                "args": [
-                    "@playwright/mcp@latest",
-                    # "--headless"
-                ]
-            },
-            "youtube_transcript_mcp":{
-                "transport": "stdio",
-                "command": "python",
-                "args": [
-                    "mcp-servers/youtube-transcript.py"
-                ]
-            }
-        })
-
-        tools = await client.get_tools()
-        print(tools)
-
-        model_name = "gemini-2.0-flash"
-        model_provider = "google_genai" #google_genai
 
-        model = init_chat_model(model_name, model_provider=model_provider)
-
-        # # model_name = "deepseek-ai/deepseek-v3.1"
-        # # model_name = "deepseek-ai/deepseek-v3.1-terminus"
-        # # model_name = "minimaxai/minimax-m2"
-        # # model_name = "mistralai/mistral-nemotron"
-        # # model_name = "qwen/qwen3-next-80b-a3b-instruct"
-        # model_name = "qwen/qwen3-next-80b-a3b-thinking"
-        # # model_name = "moonshotai/kimi-k2-instruct-0905"
-        # model_name = "nvidia/llama-3.3-nemotron-super-49b-v1.5"
-        # # model_provider = "nvidia"
-
-        # model = ChatOpenAI(
-        #     model=model_name,
-        #     openai_api_key=os.getenv("NVIDIA_API_KEY"),
-        #     openai_api_base="https://integrate.api.nvidia.com/v1"
-        # )
+        with open("configurations/mcp-server-config.json", "r") as config_file:
+            mcp_config = json.load(config_file)
+
+        client = MultiServerMCPClient(mcp_config)
+
+        tools = await client.get_tools()
+        print(tools)
+
+        if model_provider == "google_genai":
+            model = init_chat_model(model_name, model_provider=model_provider)
+        elif model_provider == "nvidia":
+            model = ChatOpenAI(
+                model=model_name,
+                openai_api_key=os.getenv("NVIDIA_API_KEY"),
+                openai_api_base="https://integrate.api.nvidia.com/v1"
+            )
+        else:
+            # Default to nvidia if provider is not specified
+            model = ChatOpenAI(
+                model=model_name,
+                openai_api_key=os.getenv("NVIDIA_API_KEY"),
+                openai_api_base="https://integrate.api.nvidia.com/v1"
+            )
 
         agent = create_agent(model, tools)
 
@@ -103,9 +84,6 @@ class LangChainAgent:
             ]
         },
         config={"callbacks": self.callbacks})
-
-        # print(f"Agent returning answer: {answer}")
-
 
         final_answer = self.extract_final_answer(answer)
         print(f"Extracted final answer: {final_answer}")
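The `nvidia` branch and the fallback branch above are identical, so the dispatch can collapse to a single guard. A possible refactor sketch, not part of this commit; it relies only on imports the file already has (`init_chat_model`, `ChatOpenAI`, `os`):

```python
import os

from langchain.chat_models import init_chat_model
from langchain_openai import ChatOpenAI

def build_model(model_name: str, model_provider: str):
    """Construct the chat model for a provider (sketch of a refactor)."""
    if model_provider == "google_genai":
        return init_chat_model(model_name, model_provider=model_provider)
    # "nvidia" and any unrecognized provider fall through to the
    # OpenAI-compatible NVIDIA endpoint, as in the committed code.
    return ChatOpenAI(
        model=model_name,
        openai_api_key=os.getenv("NVIDIA_API_KEY"),
        openai_api_base="https://integrate.api.nvidia.com/v1",
    )
```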
requirements.txt CHANGED
@@ -1,17 +1,11 @@
-gradio==5.49.1
-requests
-gradio[oauth]
+requests==2.32.5
+gradio[oauth]==5.49.1
 langfuse==3.8.1
-# smolagents[mcp]
-# smolagents[openai]
-# langchain-core==1.0.2
 langchain-mcp-adapters==0.1.12
 mcp==1.20.0
 langchain-google-genai==3.0.0
-# langchain-nvidia-ai-endpoints==0.3.19
 langchain==1.0.3
 openai==2.6.1
 langchain-deepseek==1.0.0
 langchain-openai==1.0.2
-youtube-transcript-api==1.2.3
-
+youtube-transcript-api==1.2.3