My weekend coding sprint for AIHelperLibrary v1.1.0 started with a GitHub issue I'd been putting off: 'Add support for Anthropic Claude models.' What seemed like a simple feature request quickly evolved into a major architectural overhaul. Thirty-six hours and about a dozen coffee refills later, I had transformed a simple OpenAI wrapper into a true multi-provider solution that now handles both OpenAI and Anthropic APIs with the same clean interface.
Why Add Claude Support?
There were a few reasons I decided to prioritize Claude integration. First, I've been testing Claude models myself over the past few months and have been impressed with their capabilities, especially on more nuanced or complex instructions. Second, I noticed that many developers in our small but growing community were implementing crude hacks to make AIHelperLibrary work with Claude—a clear sign of demand. Finally, I believe in architectural diversity when it comes to AI providers; giving developers options helps them build more resilient applications.
The question wasn't whether to add Claude support, but how to do it without completely breaking the existing API that developers were already using. That's where the real challenge (and fun) began.
Rethinking the Core Architecture
When you're expanding a library from one provider to multiple providers, you need to take a step back and rethink your approach. I started by creating a few new abstractions that would allow for a consistent interface while accommodating provider-specific behaviors:
IAIClient: The main interface that both OpenAI and Claude clients would implement.
IAIModel: An interface allowing each provider to map their own model identifiers.
AIBaseConfiguration: A base class for provider-specific configurations.
AIProviderFactory: A factory pattern implementation to instantiate the right client based on configuration.
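To make the shape of these abstractions concrete, here's a rough sketch of how the core pieces fit together. Treat the member names and defaults as illustrative guesses, not the library's exact signatures:
// Illustrative sketch only; the real signatures in AIHelperLibrary may differ.
using System.Threading.Tasks;
public interface IAIClient
{
    // The one method every provider has to support.
    Task<string> GenerateTextAsync(string prompt);
}
public interface IAIModel
{
    // Lets each provider map its own model identifier to the string its API expects.
    string GetModelString();
}
public abstract class AIBaseConfiguration
{
    // Shared knobs; provider-specific configurations derive from this and add their own.
    public int MaxTokens { get; set; } = 1024;
    public double Temperature { get; set; } = 0.7;
    public double TopP { get; set; } = 1.0;
}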
This meant rewriting a significant portion of the codebase, but I was determined to maintain backward compatibility. Developers already using the library shouldn't have to change their code unless they wanted to take advantage of the new multi-provider features.
// Before v1.1.0
var client = new OpenAIClient(apiKey, config);
var response = await client.GenerateTextAsync("Your prompt here");
// After v1.1.0 - still works exactly the same
var client = new OpenAIClient(apiKey, config);
var response = await client.GenerateTextAsync("Your prompt here");
// New factory approach for multiple providers
var factory = new AIProviderFactory();
var openAIClient = factory.CreateClient(openAIApiKey, openAIConfig);
var claudeClient = factory.CreateClient(claudeApiKey, claudeConfig);
// Both implement IAIClient and work with the same methods
var openAIResponse = await openAIClient.GenerateTextAsync("Your prompt here");
var claudeResponse = await claudeClient.GenerateTextAsync("Your prompt here");
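Inside the factory, all that really has to happen is choosing which concrete client to construct. A minimal sketch of that dispatch, assuming the decision is driven by the configuration subclass (the real implementation may key off something else entirely):
// Sketch only: pick the provider based on which configuration subclass was passed in.
// OpenAIConfiguration and ClaudeConfiguration are assumed to derive from AIBaseConfiguration.
public class AIProviderFactory
{
    public IAIClient CreateClient(string apiKey, AIBaseConfiguration config) => config switch
    {
        ClaudeConfiguration claude => new ClaudeClient(apiKey, claude),
        OpenAIConfiguration openAI => new OpenAIClient(apiKey, openAI),
        _ => throw new ArgumentException($"Unsupported configuration type: {config.GetType().Name}")
    };
}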
The Challenge of Provider-Specific Quirks
One of the more interesting challenges was dealing with the subtle differences between how each provider's API behaves. For instance, OpenAI and Claude have different request formats, parameter names, and response structures. Claude uses a 'messages' format for everything, while older OpenAI models still use the 'prompt' parameter in some cases.
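For reference, the Anthropic Messages API expects a body roughly like the one below, so the Claude side of the library has to build something along these lines. This is a simplified sketch; the helper name and exact shape are mine, not the library's:
// Simplified sketch of a Claude request body for the Anthropic Messages API.
// BuildClaudeRequestBody is a hypothetical name, not the library's actual method.
private object BuildClaudeRequestBody(string prompt)
{
    return new
    {
        model = GetModelString(),         // the Claude model identifier string
        max_tokens = _config.MaxTokens,   // required by the Messages API
        messages = new[] { new { role = "user", content = prompt } }
    };
}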
But the trickiest part was handling OpenAI's newer o-series models like o1, o3-mini, and o4-mini. These models have unique requirements—they don't accept 'temperature' or 'top_p' parameters, and instead of 'max_tokens' they use 'max_completion_tokens'. I spent a good chunk of time building intelligent parameter handling that automatically formats requests according to each provider's specific needs:
// In OpenAIClient.cs
private object BuildRequestBody(string prompt)
{
    var modelId = GetModelString();
    bool isChat = OpenAIModelHelper.IsChatModel(modelId);
    bool isOModel = modelId.StartsWith("o1") || modelId.StartsWith("o3") || modelId.StartsWith("o4");

    if (isChat)
    {
        var baseChatRequest = new
        {
            model = modelId,
            messages = new[] { new { role = "user", content = prompt } }
        };

        if (isOModel)
        {
            return new
            {
                baseChatRequest.model,
                baseChatRequest.messages,
                max_completion_tokens = _config.MaxTokens
            };
        }
        else
        {
            return new
            {
                baseChatRequest.model,
                baseChatRequest.messages,
                max_tokens = _config.MaxTokens,
                temperature = _config.Temperature,
                top_p = _config.TopP
            };
        }
    }

    // Non-chat model handling...
}
This kind of automatic parameter adaptation means developers don't have to worry about these details—the library handles them transparently.
Enterprise-Ready Features
Beyond multi-provider support, I wanted to make the library more robust for production use. Many of these features came from my own frustrations with using AI APIs in corporate environments:
Custom header support: Useful for adding organization IDs or custom tracking.
Configurable proxy settings: Essential for companies with restricted outbound connections.
Robust retry logic: With exponential backoff for rate limits and transient errors (see the sketch after this list).
Dynamic prompt management: Template-based prompting with variable replacement.
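The retry piece is plain exponential backoff: retry rate limits and transient failures, doubling the wait between attempts. Here's a minimal sketch of that pattern; the library's actual option names and defaults may differ:
// Generic exponential-backoff sketch; not AIHelperLibrary's exact implementation.
private static async Task<T> ExecuteWithRetryAsync<T>(Func<Task<T>> operation, int maxRetries = 3)
{
    var delay = TimeSpan.FromSeconds(1);
    for (int attempt = 0; ; attempt++)
    {
        try
        {
            return await operation();
        }
        catch (HttpRequestException) when (attempt < maxRetries)
        {
            // Thrown after rate limits or transient network errors; wait, then try again.
            await Task.Delay(delay);
            delay = TimeSpan.FromSeconds(delay.TotalSeconds * 2); // double the backoff
        }
    }
}
// Usage: var response = await ExecuteWithRetryAsync(() => client.GenerateTextAsync(prompt));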
These features might seem minor, but they're the difference between a library that works in a demo and one that works reliably in production. I've been on both sides of that divide, and I wanted AIHelperLibrary to be production-ready from day one.
Testing Across Providers
Testing multi-provider code presents unique challenges. Each provider has different models, rate limits, and response formats. I had to be careful about not burning through my API credits while ensuring everything worked correctly.
I ended up building a small console test app that can switch between providers and models. This made it easy to verify that the same code path produced consistent results regardless of the backend provider. It was during this testing that I discovered several edge cases where responses from different models varied just enough to cause parsing issues.
// Test app selection process
DisplayMainMenu();
var choice = Console.ReadLine()?.Trim();

switch (choice)
{
    case "1":
        await RunCustomPromptTest();
        break;
    case "2":
        await RunPredefinedPromptTest();
        break;
    case "3":
        await RunDynamicPromptTest();
        break;
    case "4":
        await RunChatTest();
        break;
    // ...
}
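The useful part is that each test resolves an IAIClient up front, and everything after that point is provider-agnostic. Roughly, it looks like the sketch below; names like openAIApiKey and claudeConfig are stand-ins for whatever the test app actually loads:
// Simplified sketch of the provider switch in the test app.
// API keys and config objects are assumed to be fields populated elsewhere.
private static async Task<string> RunPromptAsync(string provider, string prompt)
{
    var factory = new AIProviderFactory();

    IAIClient client = provider == "anthropic"
        ? factory.CreateClient(claudeApiKey, claudeConfig)
        : factory.CreateClient(openAIApiKey, openAIConfig);

    // Identical code path from here on, regardless of backend.
    return await client.GenerateTextAsync(prompt);
}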
Lessons Learned
This update reinforced something I already knew but often forget: good abstractions make everything else easier. The initial design decisions I made in v1.0 created a solid foundation that made this expansion possible without breaking changes.
It also reminded me of the importance of testing with real-world scenarios. Some of the bugs I fixed wouldn't have been obvious from just reading the API documentation—they only appeared when actually using different models with complex prompts. A few takeaways I'm carrying forward:
Plan for expansion: Even if you start with one use case, design your code to accommodate future growth.
Abstract at the right level: Too little abstraction makes expansion hard; too much makes the code confusing.
Test with real APIs: Mocks are useful but nothing beats testing against the actual services.
Document as you go: I updated the docs in parallel with the code, which helped catch inconsistencies early.
The Road Ahead
With v1.1.0 released, I'm already thinking about what's next for AIHelperLibrary. Some ideas on my roadmap include:
Streaming support: Real-time token streaming for more responsive UIs.
Function calling: Adding support for OpenAI's function calling capability.
Local LLM support: Adapters for self-hosted models like LM Studio or Ollama.
Content filtering options: More granular control over AI-generated content.
But before diving into any of those, I'm going to take a step back and gather feedback from the community. The best features often come from real users with real problems to solve.
Contributing to AIHelperLibrary
If you're using AIHelperLibrary or interested in contributing, check out the GitHub repository. I've tagged a few 'good first issue' tickets for anyone looking to get involved.
And if you've built something interesting with the library, I'd love to hear about it! Tag me on Twitter or open a discussion on GitHub. Seeing how people use the tools I build is what makes open source development so rewarding.
Final Thoughts
Building v1.1.0 was more challenging than I expected, but also more satisfying. There's something rewarding about refactoring code to be more flexible and powerful while maintaining backward compatibility. It's the kind of task that reminds me why I love software engineering in the first place.
Whether you're integrating OpenAI, Claude, or (eventually) other providers, I hope AIHelperLibrary makes your AI integration journey a little smoother. After all, that's what good abstractions are for—they let you focus on building amazing experiences without getting bogged down in API details.