Install and Configure n8n for AI Workflow Automation on Ubuntu Cloud GPU
n8n is an open-source workflow automation platform that allows you to design and automate workflows by connecting nodes representing services and actions. It combines a visual interface with custom code, enabling developers to create multi-step AI agents that integrate seamlessly with existing applications.
n8n offers several powerful and flexible automation capabilities:
- Event-Driven Execution: Workflows can run based on specific events, such as webhooks, enabling real-time system automation.
- Visual Workflow Builder: A web-based management interface lets you build workflows by connecting nodes representing triggers, actions, and integrations.
- Extensive Integration Library: n8n supports over 350 built-in services, including GitHub, Slack, Google Workspace, and Discord.
- Native JavaScript Support: You can insert custom JavaScript to create advanced automation logic.
This guide explains how to set up n8n on a Cloud GPU server, integrate AI-powered workflows, and deploy self-hosted AI agents using Ollama.
Prerequisites
- Deploy a Cloud GPU VM running Ubuntu 24.04 with a GPU of your choice.
- Access the server via SSH.
- Create a non-root user with sudo privileges.
- Install n8n and configure it with a custom domain name, e.g., n8n.example.com.
Access and Setup n8n
To access and configure the n8n dashboard, follow these steps:
Open a browser and navigate to your n8n domain, for example: https://n8n.example.com
Enter your email, name, and password to create the first admin user. Then add your company details to complete setup.
Next, request a free n8n license key via email and activate it:
- Click your username in the bottom-left corner and open Settings.
- Select Enter activation key, paste your license key, and click Activate.
Create Basic Workflows Using n8n
Follow these steps to create a simple HTTP workflow using n8n.
Navigate to Overview → click Create Workflow → Add first step.
Search for Webhook and select it. Configure the node as follows:
- Set the HTTP Method to GET.
- Change the Path to /greetings.
- Set the Respond method to “Using Respond to Webhook Node.”
Return to the canvas, add a new node, and choose Respond to Webhook. Configure it to return a JSON message:
{
"message": "Greetings from centron! The workflow is successfully executed",
"status": "success",
"timestamp": "{{ $now }}"
}
Save the workflow, click Execute Workflow, and copy the generated test webhook URL. From your terminal, run:
$ curl https://n8n.example.com/webhook-test/greetings
The response should confirm a successful workflow execution.
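Beyond eyeballing the output, a script can assert on the response fields. A minimal sketch, assuming the response body configured above; the timestamp value here is a placeholder, and on a live deployment you would capture the response with the curl call shown in the comment:

```shell
# Simulated response matching the Respond to Webhook node configured above;
# on a live deployment, capture it instead with:
#   response=$(curl -s https://n8n.example.com/webhook-test/greetings)
response='{"message":"Greetings from centron! The workflow is successfully executed","status":"success","timestamp":"2025-01-01T12:00:00.000Z"}'

# Confirm the workflow reported success before proceeding
echo "$response" | grep -q '"status":"success"' && echo "workflow OK"
```

A check like this is handy in deployment scripts, where a missing `"status":"success"` should abort the pipeline.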
Install Ollama for Local AI Model Execution
Ollama is an open-source framework for running large language models (LLMs) locally. It provides OpenAI-compatible endpoints that integrate easily with n8n.
Check your GPU status to ensure the system is ready for accelerated inference:
$ nvidia-smi
Run Ollama with Docker and GPU support:
$ sudo docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
Verify the installed Ollama version:
$ sudo docker exec -it ollama ollama --version
Download a model such as gpt-oss:20b for local AI inference:
$ sudo docker exec -it ollama ollama pull gpt-oss:20b
List all available models and open the firewall for port 11434:
$ sudo docker exec -it ollama ollama list
$ sudo ufw allow 11434/tcp
$ sudo ufw reload
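Ollama serves an HTTP API on port 11434, so once the port is open you can query the pulled model directly. A minimal sketch that builds a request body for Ollama's native /api/generate endpoint; the prompt text is an arbitrary example, and the curl call in the comment assumes the container is running on the same server:

```shell
# Request body for Ollama's /api/generate endpoint; "stream": false asks
# for a single JSON object instead of a token-by-token stream
payload='{"model":"gpt-oss:20b","prompt":"Say hello in one sentence.","stream":false}'
echo "$payload"

# With the container running, send it from the server itself:
#   curl -s http://localhost:11434/api/generate -d "$payload"
```

The model name in the payload must match a model shown by `ollama list`, otherwise the API returns an error.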
Create AI Workflows in n8n
In this section, you’ll create an AI agent in n8n that connects to Ollama for intelligent automation.
In the n8n interface:
- Go to Overview → Create New Workflow.
- Add a Chat Trigger node.
- Add an AI Agent node and link it to the Chat Trigger.
- Select Ollama Chat Model and configure credentials with your server’s IP and port 11434.
- Add a Simple Memory node so the agent retains conversation context, and a Calculator tool node for arithmetic functions.
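For the Ollama credentials, n8n needs a Base URL it can reach from the n8n host. A small sketch of the value to enter and how to verify it, where 203.0.113.10 is a placeholder for your Cloud GPU server's IP:

```shell
# Base URL to enter in n8n's Ollama credentials dialog
# (203.0.113.10 is a placeholder; substitute your server's IP)
OLLAMA_URL="http://203.0.113.10:11434"
echo "$OLLAMA_URL"

# Reachability check from the n8n host; a healthy server replies
# with the plain-text banner "Ollama is running":
#   curl -s "$OLLAMA_URL"
```

If the banner does not appear, re-check the ufw rule for port 11434 and that the Ollama container is running.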
Activate the workflow and click Open Chat. Type a question such as:
What is 3345 multiplied by 17, and divided by 5?
The AI agent will use the Calculator tool to compute the result (3345 × 17 = 56865, and 56865 ÷ 5 = 11373) and return the answer, while logging the conversation and output for review.
Conclusion
You have successfully installed n8n, integrated Ollama for local AI processing, and built intelligent workflows that automate responses using GPU acceleration. n8n provides a robust framework to design scalable, event-driven, and AI-enhanced automation pipelines. Explore additional integrations and templates in the n8n documentation to expand your automation capabilities.