AI

Background

At Bitwarden we leverage artificial intelligence tools to enhance developer productivity, improve code quality, and accelerate our development cycles. Our adoption of AI tooling is driven by several key objectives:

Enhanced Developer Productivity: AI assistants help automate repetitive tasks, generate boilerplate code, and provide intelligent code completions, allowing developers to focus on complex problem-solving and architectural decisions.

Code Quality and Consistency: AI tools assist in maintaining coding standards, identifying potential bugs, and suggesting improvements that align with our established best practices and patterns.

Knowledge Sharing: AI assistants serve as intelligent documentation companions, helping developers quickly understand unfamiliar codebases, APIs, and frameworks used across our projects.

Accelerated Onboarding: New team members can leverage AI tools to quickly understand our codebase structure, conventions, and development workflows, reducing the time needed to become productive contributors.

Security-First Approach: We carefully select and configure AI tools that align with our security requirements, ensuring that sensitive code and data remain protected while still benefiting from AI assistance. However, AI tools complement—rather than replace—human oversight and decision-making.

While AI tools enhance developer productivity and help identify potential issues, all code contributions to Bitwarden undergo thorough human review and approval by the Bitwarden engineering team.

Every contribution, whether created with or without AI assistance, must meet strict security and quality standards, align with Bitwarden's core architecture, and be thoroughly tested before being merged.

This ensures that the final decision-making and quality assurance remain firmly in the hands of our security-conscious development team. Contributors can be confident that all merged code has been carefully vetted by the Bitwarden team, regardless of the tools used to create it.

Our primary AI tooling stack centers around Anthropic's Claude, which offers both a powerful language model and flexible integration capabilities through the Model Context Protocol (MCP). This allows us to create custom workflows and integrate with our existing development tools while maintaining control over data privacy and security.

See our Getting Started section for details on how Claude Code and Claude Desktop are used in our development process, along with recommended configurations.

MCP servers

Model Context Protocol (MCP) servers extend Claude's capabilities by providing access to external tools, APIs, and data sources. They enable Claude to interact with your development environment, databases, and other services while maintaining security boundaries.

Understanding MCP servers

MCP servers are separate processes that communicate with Claude through a standardized protocol. They can:

  • Access local file systems and databases
  • Execute commands and scripts
  • Integrate with third-party APIs
  • Provide specialized reasoning capabilities

We currently recommend that everyone install at least two servers:

Best practices

Security considerations:

  • Only install MCP servers from trusted sources
  • Review server permissions and capabilities before installation:
    • Examine the server's source code or documentation to understand what file system access it requires
    • Verify what external APIs or services the server connects to
    • Check if the server executes system commands and understand which ones
    • Confirm whether the server stores persistent data and where it's stored
    • Review network permissions and ensure the server only communicates with expected endpoints
    • Validate that the server follows the principle of least privilege
  • Use trusted LLM providers and models:
    • Prefer established providers with strong security track records (e.g., Anthropic)
    • Verify the provider's data handling policies and ensure they align with Bitwarden's security requirements
    • Confirm that your API keys and credentials are stored securely
    • Understand whether your prompts and code are used for model training (opt out if possible)
    • Use enterprise or business tier services when available for enhanced security guarantees
  • Core model usage guidelines:
    • Use the latest stable model versions to benefit from security improvements and bug fixes
    • Avoid deprecated or experimental models in production workflows
  • Be aware of model capabilities and limitations; not all models are suitable for code generation
    • Consider model context windows and token limits when designing workflows
    • Use model-specific features (like Claude's extended thinking) appropriately for complex tasks
    • Monitor model output for hallucinations or incorrect information, especially in security-critical code
  • Regularly update servers to get security patches
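To make the least-privilege point above concrete: an MCP server that exposes file access should resolve every requested path and confirm it stays inside an allowed root before touching the file system. A minimal sketch (the function name and directory layout are illustrative, not part of any particular MCP server):

```python
from pathlib import Path

def resolve_within_root(root: str, requested: str) -> Path:
    """Resolve `requested` against `root`, refusing paths that escape it.

    This mirrors the containment check a least-privilege MCP file server
    should perform before reading or writing on behalf of the model.
    """
    root_path = Path(root).resolve()
    candidate = (root_path / requested).resolve()
    # is_relative_to rejects traversal such as "../../etc/passwd"
    if not candidate.is_relative_to(root_path):
        raise PermissionError(f"{requested!r} escapes the allowed root {root!r}")
    return candidate
```

Requests like `../../etc/passwd` resolve to a location outside the root and are rejected, while ordinary relative paths inside the project are returned in resolved form.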

Performance optimization:

  • Limit the number of active servers to those you actively use
  • Monitor resource usage, especially for memory-intensive servers
  • Configure appropriate timeouts for long-running operations
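The timeout guidance above applies to any call out to a server process: a hung server should not be able to stall the whole session. A generic sketch using Python's asyncio (the helper name and the 0.1-second limit are arbitrary choices for illustration):

```python
import asyncio

async def call_with_timeout(coro, seconds: float):
    """Run an awaitable, cancelling it if it exceeds `seconds`."""
    try:
        return await asyncio.wait_for(coro, timeout=seconds)
    except asyncio.TimeoutError:
        return None  # caller decides how to surface the failure

async def slow_operation():
    # stands in for a long-running MCP tool call
    await asyncio.sleep(10)
    return "done"

result = asyncio.run(call_with_timeout(slow_operation(), 0.1))
# result is None because the operation exceeded the timeout
```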

Data management:

  • Regularly back up memory server data directories
  • Clear old session data periodically to maintain performance
  • Use project-specific memory contexts when appropriate
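A memory-server backup can be as simple as archiving the data directory under a timestamped name. A sketch (the directory paths and archive naming scheme are placeholders):

```python
import shutil
import time
from pathlib import Path

def backup_memory_dir(data_dir: str, backup_root: str) -> str:
    """Archive `data_dir` into a timestamped .tar.gz under `backup_root`."""
    Path(backup_root).mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive_base = str(Path(backup_root) / f"memory-{stamp}")
    # make_archive appends the extension and returns the full archive path
    return shutil.make_archive(archive_base, "gztar", root_dir=data_dir)
```

Running this from a scheduled job keeps a rolling set of snapshots; pair it with periodic cleanup of old session data as noted above.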

Integration with development workflow:

  • Configure project-specific MCP servers in repository .claude/ directories
  • Document custom MCP server requirements in project README files
  • Share MCP configurations with team members for consistency
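As a sketch of the shared project-level configuration described above, a checked-in file might look like the following (the file location, server name, and package are illustrative; consult the Claude Code documentation for the exact path and schema your version expects):

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```

Committing a file like this alongside the repository gives every team member the same server set without manual setup, and documents the project's MCP requirements in one place.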