
Claude for internal knowledge management: a 2026 UK implementation guide

By The AI Consultancy team

What internal knowledge management actually means

Internal knowledge management is the unglamorous use case at the centre of most successful UK Claude rollouts. Staff need to find policy documents, procedure notes, prior client deliverables, training materials, and reference content. The traditional answer is SharePoint or Google Drive search plus institutional memory. The Claude answer is a natural-language assistant that reads across the firm's existing repositories, answers questions in context, cites sources, and respects the access controls already in place.

This article covers what works, what does not, and the operational pattern UK firms actually use in production. It is intended for IT directors, knowledge management leads, and operations leads at UK SMEs and enterprise firms with 50 to 5,000 staff. For the underlying service offer see our Claude implementation page.

The two architectures and when each fits

Internal knowledge management on Claude lands in one of two architectures. Choosing between them is the first scoping decision.

Architecture A: Claude.ai Enterprise with MCP connectors. Claude.ai Enterprise is the user-facing product. Anthropic's Model Context Protocol connectors plug into Microsoft 365 (SharePoint, OneDrive, Outlook) and Google Workspace (Drive, Gmail, Docs) directly. Users ask questions in the Claude.ai interface; Claude reads from the connected repositories, respects per-user permissions, and cites source documents. This is the lowest-friction architecture and works for most UK SMEs whose knowledge lives in standard productivity suites.

Architecture B: Custom RAG application on the Claude API. The firm builds a custom internal application: documents are ingested into a vector store, retrieval is custom (BM25, vector, or hybrid), and Claude answers via the Anthropic API or via Bedrock or Vertex AI. This is the route for firms with bespoke document management systems, strict UK-only data residency requirements that the direct Claude.ai web interface does not meet, or audit and traceability requirements that need full custom control.
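
For orientation, here is a minimal sketch of the Architecture B pattern using the Anthropic Python SDK. The retrieval layer, chunk schema, and model ID are illustrative placeholders, not a prescription; in production the retrieve step would be the firm's own hybrid search over its vector store.

```python
# Minimal Architecture B sketch: retrieve chunks, then ask Claude to answer
# with citations. retrieve() and the chunk schema are illustrative placeholders
# for whatever vector store / hybrid search the firm actually runs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def answer(question: str, retrieve) -> str:
    chunks = retrieve(question, top_k=5)  # hypothetical BM25 + vector hybrid search
    context = "\n\n".join(f"[{c['doc_id']}] {c['text']}" for c in chunks)
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # example model ID; pin per firm policy
        max_tokens=1024,
        system="Answer only from the sources provided. Cite the [doc_id] of every source used.",
        messages=[{"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"}],
    )
    return response.content[0].text
```

On Bedrock or Vertex AI the same pattern applies via the SDK's AnthropicBedrock and AnthropicVertex clients; the application code barely changes.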

Most UK SMEs should default to Architecture A. The reasons are practical: faster to ship, lower running cost, lower failure surface, less ongoing engineering. The MCP connector ecosystem now covers the majority of mainstream productivity tools and the gap is narrowing month by month. Architecture B is appropriate where a specific compliance, security, or system-integration requirement rules Architecture A out, not as a default for technical preference.

The ingestion problem

Internal knowledge management projects fail more often on ingestion than on the model. Three issues recur.

Source quality. The firm's documents are out of date, duplicated, or wrong. Claude faithfully reflects whatever it reads. If the policy library is three years old, Claude will give answers that are three years old. There is no AI fix for stale documents; the project has to include a content cleanup phase before or during rollout.

Source coverage. Important institutional knowledge sits in places the connectors do not reach: a one-person Notion workspace, a Trello board, a shared mailbox, a wiki on an internal server. The first version of the deployment misses these and users notice immediately. Map the sources during scoping; do not assume what is in SharePoint and Drive is the full picture.

Sensitive content. The firm's repositories include documents that should not be returned to all staff: HR records, board papers, M&A files, salary information. Either the access controls already in place are correct (in which case Architecture A's per-user permission respect handles it) or they are not (in which case the project has to remediate access controls before AI is added).

The pattern that works is to scope ingestion as its own workstream alongside Claude configuration, run a content audit before going live, and accept that ingestion is an ongoing operational job rather than a one-off project.
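
One way to make the content audit concrete is a staleness report over metadata exported from the repositories. A minimal sketch, assuming a flat list of metadata records; the field names (path, modified, owner) are illustrative, not any particular connector's schema:

```python
# Flag documents that have not been modified in over a threshold period.
# Metadata is assumed to be exported from SharePoint/Drive beforehand.
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=3 * 365)  # example threshold: three years

def stale_documents(metadata: list[dict], now: datetime | None = None) -> list[dict]:
    now = now or datetime.now()
    return sorted(
        (doc for doc in metadata if now - doc["modified"] > STALE_AFTER),
        key=lambda doc: doc["modified"],
    )

# Example usage with two illustrative records:
docs = [
    {"path": "/policies/expenses.docx", "modified": datetime(2021, 3, 1), "owner": "finance"},
    {"path": "/policies/travel.docx", "modified": datetime(2025, 11, 2), "owner": "ops"},
]
for doc in stale_documents(docs):
    print(f"STALE: {doc['path']} (last modified {doc['modified']:%Y-%m-%d}, owner: {doc['owner']})")
```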

UK security and compliance configuration

The configuration steps that should be in place before staff can use the assistant on real internal documents:

SSO. Claude.ai Enterprise integrates with the firm's identity provider (Microsoft Entra ID, Okta, Google Workspace identity). Local Claude accounts should be disabled.

Data processing terms. Claude.ai Enterprise data is contractually excluded from training under Anthropic's Data Processing Addendum. Verify the DPA is signed, the data residency setting matches the firm's policy, and the audit log retention period is appropriate.

Connector scoping. Each MCP connector is scoped to a specific source. Apply the principle of least privilege: connect what is needed, not the entire tenant.

Per-user permissions. Claude respects the underlying source's permissions: users see only the documents they would otherwise see in SharePoint, Drive, or the connected source. Verify this with a test account and a known-restricted document; a minimal smoke test is sketched after this checklist.

Acceptable use policy. Document what staff can and cannot do with the assistant. Common restrictions: do not enter client personal data into prompts (most queries do not require it), do not generate or paste credentials, do not use the AI for decisions that fall under Article 22 UK GDPR without human review.

DPIA. Where the assistant processes personal data of staff, customers, or third parties, complete a DPIA. The ICO has published specific AI DPIA guidance.

Audit trail. Audit logging is on by default in Claude.ai Enterprise. Confirm the firm's data retention policy is reflected, and that audit logs flow to the firm's SIEM where required.
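
Where the firm is running Architecture B and controls the application, the per-user permission check above can be scripted as a repeatable smoke test (under Architecture A it is usually a documented manual test with a test account). A sketch, assuming a hypothetical answer_as() entry point into the firm's own application:

```python
# Permission smoke tests for a custom (Architecture B) deployment.
# answer_as() is a hypothetical entry point that runs a query under a given
# test identity; the restricted document and the marker string it contains
# are planted in advance.
def test_restricted_document_not_leaked(answer_as):
    # A test account that must NOT have access to the board-papers library.
    answer = answer_as(
        user="test.standard@firm.example",
        question="Summarise the latest board paper on the acquisition.",
    )
    # The marker string exists only inside the restricted document.
    assert "MARKER-BOARD-2026-RESTRICTED" not in answer

def test_permitted_document_is_found(answer_as):
    # The same account should still find content it is entitled to see.
    answer = answer_as(
        user="test.standard@firm.example",
        question="What is the approved daily travel expense limit?",
    )
    assert answer.strip() != ""
```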

The operational pattern that gets used

Most knowledge management deployments achieve heavy use in week one and decline in weeks two and three because the assistant is configured but not embedded into how people actually work. Three operational habits separate the deployments that stick from the ones that fade.

Embed in the meeting flow. The assistant is used in real meetings to look up policy, find prior work, or check facts in context. This is where the time-saving is most visible. Train teams to ask the assistant during meetings rather than after.

Curate the prompts. Provide a small library of role-specific starting prompts (sales: "find the most recent proposal we sent to a similar customer"; ops: "summarise the relevant section of our supplier policy"; finance: "find the approved expense limits for travel"). These reduce the cold-start friction that kills usage; one way to structure such a library is sketched after this list.

Maintain the content. Schedule a quarterly content review: which documents are out of date, which are missing, which sources should be added. Without this, the assistant degrades over time even if Claude itself improves.
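
One way to hold the prompt library is as a plain mapping the rollout team maintains and surfaces in onboarding material. A sketch, with illustrative roles and prompts, not a product feature:

```python
# Role-specific starter prompts, kept under version control by the rollout
# team. Roles, prompts, and placeholders ({customer}, {topic}) are examples.
STARTER_PROMPTS: dict[str, list[str]] = {
    "sales": [
        "Find the most recent proposal we sent to a customer similar to {customer}.",
        "Summarise our standard payment terms for a new {sector} client.",
    ],
    "ops": [
        "Summarise the section of our supplier policy that covers {topic}.",
        "What is our escalation procedure for a {severity} incident?",
    ],
    "finance": [
        "Find the approved expense limits for travel.",
        "What is the sign-off threshold for purchases over {amount}?",
    ],
}

def prompts_for(role: str) -> list[str]:
    return STARTER_PROMPTS.get(role, [])
```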

The single most important measurement: weekly active users as a percentage of licensed users. A healthy internal knowledge deployment runs at 40 to 60 percent WAU after month three. If the deployment is at 10 to 20 percent WAU, it is failing on adoption regardless of what Claude is technically doing.
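
The WAU figure is straightforward to compute from a usage-event export. A minimal sketch, assuming (user, timestamp) event pairs; the event format is illustrative, not the actual export schema:

```python
# Weekly active users as a percentage of licensed seats, computed from a
# usage-event export. Event structure (user, timestamp) is illustrative.
from datetime import datetime, timedelta

def wau_percent(events: list[tuple[str, datetime]], licensed_seats: int,
                week_ending: datetime) -> float:
    week_start = week_ending - timedelta(days=7)
    active = {user for user, ts in events if week_start <= ts <= week_ending}
    return 100.0 * len(active) / licensed_seats

# Example: 55 distinct users active in a week against 120 licensed seats
# gives 45.8 percent, inside the healthy 40 to 60 percent band.
```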

Where it works best

Five UK use cases are unusually well-suited to Architecture A internal knowledge management. Each tends to deliver visible adoption and time savings within the first 60 days.

  1. Professional services firms (law, accounting, consulting). Precedent libraries, client-by-client knowledge, and house-style references. The 1M-token context window means Claude can read across an entire matter or engagement file.
  2. Software companies. Engineering documentation, runbooks, design decisions, and post-mortems are exactly the document type Claude handles well, and engineering staff adopt fast.
  3. Multi-location operations. Healthcare groups, retail chains, hospitality groups. The assistant helps front-line staff find policy and procedure across many locations without ringing head office.
  4. Membership and trade bodies. Members ask the assistant to find the relevant guidance, training, or standard from the body's existing materials.
  5. UK public sector. Local authorities, NHS trusts, and central government departments operating within the GOV.UK Claude rollout context. Particular value where staff need to find and cite specific policy or guidance documents.

Where Architecture A is not enough

Architecture A reaches its limits in three contexts.

Bespoke document management systems. Where the document management system is iManage, NetDocuments, OpenText, or a homegrown system without an MCP connector, the integration has to be built. Architecture B becomes the right answer.

Strict UK-only processing. Where the firm's regulatory or contractual obligations require UK-only inference and the direct Claude.ai web tier (still primarily US-processed as of April 2026) does not meet that bar, Architecture B via Bedrock UK South does.

Audit and traceability. Where the use case requires a complete chain-of-evidence trail (which document was retrieved, which chunks were used, which prompt produced the answer), the custom architecture provides full visibility that Claude.ai Enterprise admin telemetry does not.
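
A sketch of what that chain-of-evidence record can look like in a custom application: one structured entry per answer, capturing which chunks were retrieved and a hash of the exact prompt and answer. The schema is illustrative.

```python
# One structured audit record per answer in a custom (Architecture B) app.
# Field names and the downstream destination are illustrative; the point is
# that every answer traces back to specific chunks and the exact prompt used.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user: str, question: str, chunks: list[dict],
                 prompt: str, answer: str) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "question": question,
        "retrieved": [
            {"doc_id": c["doc_id"], "chunk_id": c["chunk_id"]} for c in chunks
        ],
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "answer_sha256": hashlib.sha256(answer.encode()).hexdigest(),
    }
    return json.dumps(record)  # ship to the firm's log pipeline / SIEM
```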

For firms in these contexts, the tradeoff is more engineering work and higher running cost in exchange for tighter control and integration depth.

Pricing and engagement

Claude internal knowledge management deployments are scoped after a free initial conversation. Architecture A engagements (Claude.ai Enterprise plus MCP connectors) are typically the fastest and cheapest, with phase 1 running over 6 to 10 weeks for a 50 to 200 user firm. Architecture B engagements (custom RAG on Bedrock or Vertex AI) take longer (10 to 20 weeks) and have meaningful recurring infrastructure cost on top of the model cost.

For the broader Claude implementation context see our Claude implementation service. For the underlying token economics see our Claude API implementation cost guide.

Frequently asked questions

What is the difference between Claude.ai Enterprise with MCP and a custom RAG application?
Claude.ai Enterprise with MCP connectors uses Anthropic's pre-built integrations with Microsoft 365, Google Workspace, and a growing list of business systems. It is the lowest-friction route. A custom RAG application uses the Claude API and a custom retrieval layer built over a vector store. It is the route for bespoke document management systems, strict UK-only data residency requirements, or audit and traceability requirements that need full custom control.
How does Claude respect document-level permissions?
Claude.ai Enterprise's MCP connectors honour the underlying source's permissions. A user only sees documents they would otherwise see in SharePoint, OneDrive, or Drive. The trust boundary is the source system's access controls, not Claude. Where source-system controls are wrong, Claude faithfully reflects that, which is why an access-control audit is part of any well-scoped deployment.
Does Claude train on internal documents loaded into Claude.ai Enterprise?
No. Claude.ai Enterprise data is contractually excluded from training under Anthropic's Data Processing Addendum. Verify the DPA is signed and the data residency setting matches the firm's policy. Consumer Claude.ai Free or Pro accounts do not carry the DPA and should not be used on internal confidential content.
How do we measure whether the internal knowledge assistant is working?
Three metrics: weekly active users as a percentage of licensed users (target 40 to 60 percent after month three), task completion rate (does the user act on the answer or repeat the query), and user-reported time saved per week. Below 20 percent WAU after month three indicates an adoption problem regardless of what the model is technically doing. Above 60 percent indicates strong fit and case for expansion.
What ongoing work does the deployment require?
Three habits: a quarterly content review (which documents are out of date, which sources to add), a curated library of role-specific starting prompts to reduce cold-start friction, and ongoing training to embed assistant use in meeting flows rather than treating it as a separate tool. Without these, even technically excellent deployments fade in usage after the first month.

Related Articles

Rolling out ChatGPT or Claude in a UK SME: the first decisions

Agentic AI for UK businesses in 2026: costs and use cases

RAG for UK SMEs: a practical 2026 implementation guide

Ready to explore AI for your business?

Book a free 20-minute consultation. No obligation, no jargon.