Using LangChain4j in Micronaut
LangChain4j is a Java library that simplifies working with LLMs, enabling us to create AI-driven applications with minimal complexity. By leveraging LangChain4j, we can build applications that interact with LLMs for tasks such as natural language processing, chatbot development, and information retrieval. Micronaut, on the other hand, is a lightweight and cloud-native Java framework designed for building microservices and serverless applications. It provides fast startup times, low memory footprint, and strong support for reactive programming, making it an excellent choice for modern application development. This article explores the integration of LangChain4j with Micronaut for building AI-powered applications.
1. Setting Up a Micronaut Project with Maven
To get started, create a new Micronaut application using the Micronaut CLI:
mn create-app com.jcg.micronaut-ai --build=maven
Navigate to the project directory:
cd micronaut-ai
2. Dependencies
LangChain4j in Micronaut requires an annotation processor in the Maven compiler plugin configuration. This annotation processor is responsible for generating the necessary implementation classes for AI service interfaces annotated with @AiService.
2.1 Why Is the Annotation Processor Needed?
Micronaut uses compile-time dependency injection rather than runtime reflection to improve performance and reduce memory usage. The LangChain4j annotation processor generates implementation code for interfaces marked with @AiService, allowing Micronaut to wire them into the application automatically.
2.1.1 How Does It Work?
When you define an interface like this:
@AiService
public interface MovieRecommendationService {

    @SystemMessage("You are a movie expert. Recommend a movie based on the given genre.")
    String recommendMovie(String genre);
}
At compile time, the annotation processor turns this interface into a concrete implementation that integrates with LangChain4j, handling user queries, injecting the system message (@SystemMessage), and managing model responses, so that MovieRecommendationService works without any hand-written implementation.
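To make this concrete, the generated code behaves roughly like the hand-written delegation below. This is a hypothetical sketch, not the real generated source (which is an internal detail of the Micronaut integration and may differ); conceptually, it builds a LangChain4j AiServices proxy around the annotated interface and registers it as a bean:

import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.service.AiServices;
import jakarta.inject.Singleton;

// Hypothetical sketch of the generated implementation; the real generated
// class is internal to micronaut-langchain4j and will differ.
@Singleton
class MovieRecommendationServiceImpl implements MovieRecommendationService {

    private final MovieRecommendationService delegate;

    MovieRecommendationServiceImpl(ChatLanguageModel model) {
        // LangChain4j's AiServices builder wires the @SystemMessage prompt
        // and the model call behind the interface method.
        this.delegate = AiServices.builder(MovieRecommendationService.class)
                .chatLanguageModel(model)
                .build();
    }

    @Override
    public String recommendMovie(String genre) {
        return delegate.recommendMovie(genre);
    }
}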
2.2 Configuring the Annotation Processor in Maven
To enable this behaviour, you need to specify the annotation processor in the Maven compiler plugin inside your pom.xml:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <annotationProcessorPaths>
            <path>
                <groupId>io.micronaut.langchain4j</groupId>
                <artifactId>micronaut-langchain4j-processor</artifactId>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>
This ensures that the micronaut-langchain4j-processor runs at compile time, generating the necessary classes.
2.3 Setting Up Ollama
The next step is to configure a Chat Language Model. Ollama is an AI framework designed for running and managing local LLMs (Large Language Models) on personal devices or servers. It provides an easy way to download, run, and interact with various open-source LLMs, making them accessible offline. It can be downloaded and installed from the official Ollama website.
Ollama allows users to deploy chat-optimized LLMs like LLaMA 2, Mistral, or Orca Mini, enabling conversational AI without relying on cloud-based APIs. This article uses the orca-mini model.
ollama run orca-mini
After downloading and installing the model, verify its availability using the ollama list command. Once the server starts, you can check its status at http://localhost:11434 to confirm that Ollama is running.
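You can also verify the server from the terminal. In recent Ollama releases, the root endpoint answers with a plain status message (the exact wording may vary between versions):

curl http://localhost:11434
# Typically prints: Ollama is running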
Next, add the following necessary dependencies:
<dependency>
    <groupId>io.micronaut.langchain4j</groupId>
    <artifactId>micronaut-langchain4j-core</artifactId>
</dependency>
<dependency>
    <groupId>io.micronaut.langchain4j</groupId>
    <artifactId>micronaut-langchain4j-ollama</artifactId>
</dependency>
The micronaut-langchain4j-core dependency enables Micronaut’s integration with LangChain4j, handling AI-related annotations and service generation. The micronaut-langchain4j-ollama dependency adds support for locally hosted LLMs via Ollama, allowing our Micronaut applications to interact with models like LLaMA 2 and Orca Mini without external APIs.
3. Configuring LangChain4j
In src/main/resources/application.properties, configure LangChain4j with Ollama:
langchain4j.ollama.base-url=http://localhost:11434
langchain4j.ollama.model-name=orca-mini
langchain4j.ollama.timeout=600s
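If your project uses application.yml rather than application.properties, the same keys translate directly into nested YAML:

langchain4j:
  ollama:
    base-url: http://localhost:11434
    model-name: orca-mini
    timeout: 600s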
4. Implementing the AI Service
Integrating AI-powered recommendations into a Micronaut application enables users to receive concise and insightful responses. By leveraging LangChain4j with Micronaut, we can define an AI service that generates tailored suggestions based on user input. The following interface defines a TravelGuide service that provides a brief guide for a given destination.
@AiService
public interface TravelGuide {

    @SystemMessage("""
            You are a travel expert. Provide a short travel guide
            for the given destination in at most 3 sentences.
            """)
    String recommendTravel(String destination);
}
In this code, the @AiService annotation marks TravelGuide as an AI-based service, while the @SystemMessage annotation instructs the AI model to behave as a travel expert. The method recommendTravel(String destination) accepts a location as input and returns a concise travel guide of at most three sentences. This setup ensures that the AI-generated response is structured and informative while remaining concise.
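The same pattern extends to parameterised prompts. The following is an illustrative sketch (the TravelGuidePlanner interface is hypothetical) using LangChain4j's standard @UserMessage and @V annotations to template the user prompt; the Micronaut integration builds on these same LangChain4j service annotations:

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import dev.langchain4j.service.V;

// Hypothetical example: the user message is templated with named variables.
// @AiService comes from micronaut-langchain4j, as in the earlier examples.
@AiService
public interface TravelGuidePlanner {

    @SystemMessage("You are a travel expert.")
    @UserMessage("Suggest a {{days}}-day itinerary for {{destination}}.")
    String planItinerary(@V("days") int days, @V("destination") String destination);
}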
5. Creating a Micronaut Controller
Now, we expose the AI service via a Micronaut REST controller.
@Controller("/travel")
public class TravelController {

    @Inject
    private TravelGuide travelGuide;

    @Get("/recommend")
    public String recommendTravel(@QueryValue String destination) {
        return travelGuide.recommendTravel(destination);
    }
}
This controller listens for GET requests at /travel/recommend and invokes the AI service to generate a short travel guide based on the provided destination.
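As a side note, Micronaut generally recommends constructor injection over injecting into private fields, since private field injection falls back to reflection. An equivalent controller written that way would be:

@Controller("/travel")
public class TravelController {

    private final TravelGuide travelGuide;

    // Constructor injection: Micronaut supplies the generated TravelGuide bean.
    public TravelController(TravelGuide travelGuide) {
        this.travelGuide = travelGuide;
    }

    @Get("/recommend")
    public String recommendTravel(@QueryValue String destination) {
        return travelGuide.recommendTravel(destination);
    }
}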
5.1 Running the Application
Start the Micronaut application:
mvn mn:run
Once the server is running, test the API using curl:
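For example, assuming the default Micronaut port 8080 (the destination value here is just an illustration):

curl "http://localhost:8080/travel/recommend?destination=Paris"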
Sample output: the endpoint returns a short, AI-generated travel guide for the requested destination (the exact text varies from run to run).
6. Testing Micronaut LangChain4j with Ollama
To ensure our AI-powered TravelGuide service works correctly, we need to write tests that validate its functionality. Micronaut provides built-in support for testing using @MicronautTest. Include the following dependency in your pom.xml to enable testing with Ollama:
<dependency>
    <groupId>io.micronaut.langchain4j</groupId>
    <artifactId>micronaut-langchain4j-ollama-testresource</artifactId>
    <scope>test</scope>
</dependency>
This dependency provides a test resource that integrates Ollama with Micronaut LangChain4j, enabling automated testing of our AI-powered services without having to start and configure an Ollama server by hand during development.
6.1 Writing a Micronaut Test Case
Below is a test class that verifies whether our AI-powered travel guide service generates valid responses:
@MicronautTest
class TravelGuideTest {

    private static final Logger logger = LoggerFactory.getLogger(TravelGuideTest.class);

    @Test
    void testTravelGuideService(TravelGuide travelGuide) {
        String result = travelGuide.recommendTravel("Paris");
        logger.info("AI Response: {}", result);
        assertNotNull(result);
    }
}
This test verifies that the TravelGuide AI service returns a non-null response when asked for a travel guide for Paris. If everything is configured correctly, the test will pass, and you will see an AI-generated travel guide in the console.
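You can run the test with the standard Maven command:

mvn test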
7. Conclusion
In this article, we explored how to integrate LangChain4j with Micronaut, configure Ollama, and build an AI-powered service. With the provided setup, we can interact with locally hosted language models in our applications.
8. Download the Source Code
This article covered Micronaut LangChain4j.
You can download the full source code of this example here: micronaut langchain4j