Thoughts on the Model Context Protocol Part 2

Valtteri Karesto · May 21, 2025
Since Anthropic first open-sourced the Model Context Protocol (MCP) in late 2024, the community has raised concerns about its transport, security, metadata and adoption. The **2025-03-26 MCP spec update** addresses most of these—but real-world implementations and host support remain in flux. Below is a concise, critical rundown.



1. Adoption Is No Longer in Doubt

Then: Would OpenAI or Google ever back MCP?
Now: OpenAI’s Agents SDK, ChatGPT Desktop and API support MCP [1], and Google has signaled compatibility for Gemini [2].

Note: It’s still hard to know exactly which host apps fully implement the new spec changes; we expect major clients to publish detailed support matrices soon.


2. Transport Is Cloud-Friendly

Then: MCP’s HTTP + SSE transport forced stateful, long-lived streams—hard to scale on serverless platforms.
Now: A Streamable HTTP transport lets each request stand alone or upgrade to a short-lived stream, making MCP servers deployable as standard HTTP functions on AWS Lambda, Azure Functions, etc., without wrestling with SSE.
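With Streamable HTTP, each tool call is an ordinary JSON-RPC 2.0 message POSTed on its own; the server may reply with plain JSON or open a short-lived SSE stream for just that request. A minimal sketch of what a client sends (the tool name and arguments here are made up for illustration):

```python
import json

def build_tools_call(request_id: int, tool: str, arguments: dict) -> dict:
    """Build one self-contained JSON-RPC 2.0 request for an MCP tool call.

    Because each request stands alone, a serverless function can handle
    it without holding a long-lived connection.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Hypothetical tool call; the POST advertises both response modes so the
# server can answer with plain JSON or a short-lived event stream.
body = build_tools_call(1, "get_weather", {"city": "Helsinki"})
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}
payload = json.dumps(body)
```

Since the payload carries its own `id`, retries and horizontal scaling work the way they do for any stateless HTTP API.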


3. Built-In Security via OAuth 2.1

Then: No standard auth—each server rolled its own, risking inconsistent or weak security.
Now: The spec mandates an OAuth 2.1 flow for remote MCP servers, so users explicitly grant scopes and tokens before any tool call.

Reality check: As of May 2025, few production implementations exist—how OAuth will look in practice will become clear once servers and host clients roll out support.
Local stdio plugins remain outside this flow, so they still rely on OS-level safeguards.
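OAuth 2.1 makes PKCE mandatory, so any MCP host acting as an OAuth client will need to generate a verifier/challenge pair before redirecting the user to the server's authorization endpoint. A stdlib-only sketch (RFC 7636, S256 method):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge.

    The challenge goes in the authorization request; the verifier is
    sent later with the token request so the server can bind the two.
    """
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
```

How scopes are named and granted per MCP server is one of the details that will only settle once real hosts ship support.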


4. Tool Safety Metadata

Then: Tools lacked formal labels—no way to flag “read-only” vs. “destructive,” risking accidental data loss.
Now: Every tool can declare annotations such as readOnlyHint or destructiveHint (spec changelog PR #185). Hosts can warn users or agents before running high-impact actions.
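A host can act on these hints with a simple guard. The tool definitions below are illustrative, not from a real server; the field names and defaults follow the 2025-03-26 annotation hints, where destructiveHint defaults to true when absent:

```python
def should_confirm(tool: dict) -> bool:
    """Host-side guard: require user confirmation before destructive tools.

    Unannotated tools are treated as potentially destructive, matching
    the spec's default of destructiveHint = true.
    """
    annotations = tool.get("annotations", {})
    if annotations.get("readOnlyHint", False):
        return False
    return annotations.get("destructiveHint", True)

# Hypothetical tools a server might expose.
list_files = {"name": "list_files", "annotations": {"readOnlyHint": True}}
drop_table = {"name": "drop_table", "annotations": {"destructiveHint": True}}
```

Note the hints are advisory metadata, not enforcement: a careless or malicious server can mislabel its tools, so hosts still need their own safeguards.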


5. Easier Setup & Deployment

Then: MCP servers were mainly local subprocesses; remote deployment was tricky.
Now: Streamable HTTP plus official SDKs (Python, JavaScript, Java, .NET, Swift, etc.) and reference servers (e.g. Playwright) make both local and cloud-hosted MCP servers straightforward to launch. One-click demos on AWS Lambda abound.
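For local use, a host such as Claude Desktop is pointed at a stdio server through a short JSON config. A sketch using the reference filesystem server (the server key and directory path are placeholders):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```

Remote servers skip this step entirely: the host just needs the server's HTTP endpoint and the OAuth handshake described above.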


Bottom line: The 2025-03-26 MCP spec has plugged most major gaps—transport, auth, metadata and adoption are all moving forward. Remaining work lies in proper implementation: securing local plugins, enforcing user consent, polishing UIs and tracking which hosts support the new features. MCP’s rapid evolution and heavyweight backing suggest it might become a sound investment for AI-tool interoperability.

Footnotes

  1. https://x.com/OpenAIDevs/status/1904957755829481737

  2. https://x.com/sundarpichai/status/1910082615975313788
