AI & Automation

  • Token Splitter

    The Token Splitter node in n8n divides text into chunks based on token count rather than character count. This distinction matters because large language models process and bill by tokens, not characters. By splitting on token boundaries, you get precise control over how much content you send to an AI model in each request, which directly affects both cost and output quality.

    Token-based splitting is essential when building retrieval-augmented generation (RAG) pipelines, processing long documents through AI models, or preparing text for embedding generation. If you split by characters, you might accidentally cut through the middle of a token, which can produce garbled embeddings or incomplete context. The Token Splitter avoids this by respecting the tokenisation rules of the model you are targeting.

    This node works hand-in-hand with vector store nodes and summarisation chains. You feed it a long document, it breaks it into token-counted chunks with configurable overlap, and each chunk flows downstream for embedding, summarisation, or classification. The overlap setting ensures important context at chunk boundaries is not lost, which improves retrieval accuracy in search-based workflows.

    If your team is building AI workflows that process documents and you need help getting the chunking strategy right, our AI consultants can advise on the best approach for your specific data types and use cases. Chunking strategy has a measurable impact on the quality of custom AI systems.
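    To make the overlap mechanics concrete, here is a minimal sketch in plain Python. It is an illustration only: the real node delegates to a model-specific tokeniser, whereas this sketch approximates tokens with whitespace-separated words, and the function name is ours.

```python
def split_by_tokens(text, chunk_size=200, chunk_overlap=20):
    """Split text into chunks of at most chunk_size tokens, repeating
    chunk_overlap tokens between neighbouring chunks."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    tokens = text.split()  # crude stand-in for a real model tokeniser
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break
    return chunks
```

    Note how each chunk begins with the last `chunk_overlap` tokens of the previous one, which is what preserves context across boundaries.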
  • Pinecone: Insert

    The Pinecone Insert node in n8n writes vector embeddings into a Pinecone index, which is one of the most widely used managed vector databases for AI applications. Once your text has been chunked, embedded, and inserted into Pinecone, you can perform fast semantic searches across millions of vectors. This node handles the insertion step, making it straightforward to keep your vector index up to date as part of an automated pipeline.

    Pinecone is purpose-built for similarity search at scale. Unlike the in-memory vector store, Pinecone persists your data across workflow runs, supports concurrent access from multiple applications, and can handle datasets that would not fit in memory. The Insert node lets you push new embeddings into your index whenever new data arrives — whether that is new documents, updated product descriptions, or fresh support articles.

    Teams building production retrieval-augmented generation (RAG) systems typically use Pinecone as their vector store because it handles the infrastructure complexity. You do not need to manage servers, tune indexes, or worry about scaling. The n8n integration means you can automate the entire pipeline: ingest data, chunk it, embed it, and insert it into Pinecone without writing custom code. Our medical document classification project used a similar approach to index and retrieve clinical documents at scale.

    If you are building a vector search system and need help with architecture decisions, our AI agent development and system integration teams can design a pipeline that scales with your data.
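    As a rough sketch of what the insertion step prepares, the snippet below batches chunks into records with an id, an embedding vector, and a metadata payload — the general shape Pinecone-style upserts expect. The embedder is a stub, and the function and field names are illustrative conventions, not a specific client version's API.

```python
def build_upsert_batch(chunks, embed, source):
    """Prepare one record per chunk: id, embedding vector, and metadata."""
    records = []
    for i, chunk in enumerate(chunks):
        records.append({
            "id": f"{source}-{i}",                     # stable, per-source id
            "values": embed(chunk),                    # the embedding vector
            "metadata": {"source": source, "text": chunk},
        })
    return records

# Stub embedder for illustration; a real pipeline calls an embedding model.
fake_embed = lambda text: [float(len(text)), 0.0]
batch = build_upsert_batch(["first chunk", "second chunk"], fake_embed, "doc-1")
```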
  • Facebook Lead Ads Trigger

    The Facebook Lead Ads Trigger node in n8n fires your workflow automatically whenever a new lead is captured through a Facebook or Instagram lead ad form. Instead of manually checking your ad account or waiting for email notifications, leads flow directly into your automation pipeline the moment a prospect submits their details. This eliminates the delay between lead capture and follow-up, which is critical for conversion rates.

    Speed of response is one of the strongest predictors of whether a lead converts. Research consistently shows that responding within minutes rather than hours dramatically improves contact rates. By triggering an n8n workflow instantly when a lead comes in, you can send a personalised acknowledgement email, add the contact to your CRM, notify your sales team via Slack, and enrich the lead data with additional information — all within seconds of form submission.

    This node is especially valuable for businesses running lead generation campaigns at scale. When you are spending budget on Facebook and Instagram ads, you need every lead handled promptly and consistently. Manual processes break down as volume increases, and leads slip through the cracks. An automated pipeline ensures no lead is missed and every prospect gets the same quality of initial engagement.

    Our sales automation services help businesses build exactly these kinds of pipelines. If your team is running paid lead generation campaigns and wants to maximise return on ad spend through faster follow-up, our consulting team can design a lead processing workflow that integrates with your CRM, email platform, and sales tools.
  • Recursive Character Text Splitter

    The Recursive Character Text Splitter node in n8n breaks long text documents into smaller chunks by recursively splitting on natural text boundaries — paragraphs first, then sentences, then words. This hierarchical approach produces cleaner chunks than fixed-length splitting because it respects the structure of the original text. The result is chunks that are more coherent and more useful for downstream AI processing.

    When preparing documents for embedding generation, summarisation, or classification, chunk quality directly impacts the quality of your results. Splitting in the middle of a sentence produces fragments that lose meaning in isolation. The recursive approach avoids this by trying the largest separators first (double newlines for paragraphs) and only falling back to smaller separators (single newlines, spaces) when a chunk still exceeds the target size. This gives you the best balance between chunk size consistency and content coherence.

    This node is a fundamental building block in retrieval-augmented generation (RAG) pipelines. After splitting, each chunk is typically passed through an embedding model and stored in a vector database for semantic search. The quality of your splits directly affects retrieval precision — well-formed chunks that represent complete thoughts or sections produce more relevant search results.

    Our patient data entry automation project used careful text processing to extract and classify medical information accurately. If you are building document processing workflows and want guidance on chunking strategies for your specific data, our AI consulting team and data processing specialists can help you design an approach that maximises the quality of your AI outputs.
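    The recursive fallback described above can be sketched in a few lines of plain Python. This is a simplified illustration of the technique, not the node's actual implementation: try the largest separator first, pack pieces up to the chunk size, and recurse with smaller separators only for pieces that are still too big.

```python
def recursive_split(text, chunk_size, separators=("\n\n", "\n", " ")):
    """Recursively split text on the largest separator first, falling back
    to smaller separators only for pieces still over chunk_size."""
    if len(text) <= chunk_size:
        return [text]
    if not separators:
        # No separator left: fall back to a hard character cut.
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    sep, rest = separators[0], separators[1:]
    chunks, current = [], ""
    for piece in text.split(sep):
        candidate = piece if not current else current + sep + piece
        if len(candidate) <= chunk_size:
            current = candidate            # piece still fits in current chunk
        else:
            if current:
                chunks.append(current)
            if len(piece) > chunk_size:
                # Piece alone is too big: recurse with smaller separators.
                chunks.extend(recursive_split(piece, chunk_size, rest))
                current = ""
            else:
                current = piece
    if current:
        chunks.append(current)
    return chunks
```

    Paragraphs that fit together stay together, and only oversized pieces get broken at progressively finer boundaries.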
  • Simulate Trigger

    The Simulate Trigger node in n8n lets you test workflows without needing to wait for real events to occur. It generates mock trigger data that mimics what a real trigger node would produce, so you can build, debug, and validate your workflow logic before connecting it to live data sources. This is an essential tool for any team developing n8n automations professionally.

    When building a workflow that responds to webhooks, scheduled events, or third-party triggers, you often cannot control when real data arrives. The Simulate Trigger removes this dependency by letting you define sample payloads and fire them on demand. This speeds up development significantly because you can test your entire downstream logic — data transformations, conditional routing, API calls, and output formatting — without waiting for real triggers.

    The node is particularly valuable during the development phase of complex automation projects. Rather than deploying a half-finished workflow and hoping the right data comes through to test each branch, you can systematically test every path with controlled inputs. This approach catches edge cases early and produces more reliable automations.

    Teams that follow disciplined testing practices build workflows that work correctly from day one in production. If your business is investing in business automation and wants to ensure your workflows are thoroughly tested before going live, our n8n consulting team follows structured development and testing practices that minimise production issues.
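    The pattern is the same one you would use when unit-testing any pipeline: define a sample payload shaped like the real trigger output, run your downstream logic against it, and check the result. The field names below are hypothetical, chosen purely for illustration.

```python
# A mock payload shaped like a webhook trigger might produce (illustrative fields).
sample_payload = {
    "body": {"email": "Jane.Doe@Example.com", "plan": "pro"},
    "headers": {"x-source": "signup-form"},
}

def normalise_lead(payload):
    """Downstream logic under test: normalise the email and tag the source."""
    return {
        "email": payload["body"]["email"].strip().lower(),
        "plan": payload["body"]["plan"],
        "source": payload["headers"].get("x-source", "unknown"),
    }

result = normalise_lead(sample_payload)
```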
  • DebugHelper

    The DebugHelper node in n8n is a utility tool that helps you inspect, log, and troubleshoot data flowing through your workflows. When automations misbehave or produce unexpected results, the DebugHelper lets you pause execution, examine the data at any point in the pipeline, and understand exactly what each node is receiving and outputting. It is the equivalent of placing a breakpoint in your code.

    Debugging automation workflows can be frustrating without proper tooling. Data transformations, conditional logic, and API responses can produce subtle issues that are difficult to trace by looking at final outputs alone. The DebugHelper node gives you visibility into intermediate states, so you can pinpoint exactly where data goes wrong. This is especially useful for complex workflows with multiple branches, loops, or nested sub-workflows.

    In practice, the DebugHelper is most useful during development and troubleshooting. You drop it between two nodes to inspect the data structure, verify field names and values, check for null or missing fields, and confirm that transformations are producing the expected format. Once you have resolved the issue, you can remove the node or leave it disabled for future troubleshooting sessions.

    Building maintainable automations requires good debugging practices from the start. If your team is developing n8n workflows and wants help establishing business automation best practices, our n8n consultants bring production experience that helps you avoid common pitfalls and build workflows that are easier to maintain long-term.
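    Conceptually, the node is a pass-through "tap" between two steps. The sketch below shows the idea in plain Python (the names are ours): items flow through unchanged while a log records their keys and any null fields.

```python
def debug_tap(items, label, log=None):
    """Pass items through unchanged while recording what flows past,
    like dropping a DebugHelper between two nodes."""
    sink = log if log is not None else []
    for item in items:
        missing = [k for k, v in item.items() if v is None]
        sink.append({"label": label, "keys": sorted(item), "missing": missing})
    return items  # identical data continues downstream

log = []
data = [{"name": "Ada", "email": None}]
out = debug_tap(data, "after-enrichment", log)
```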
  • Microsoft Outlook Trigger

    The Microsoft Outlook Trigger node in n8n starts your workflow automatically when specific events occur in a Microsoft Outlook mailbox — most commonly when a new email arrives. This lets you build automated email processing pipelines that respond to incoming messages in real time, routing them to the right systems and people without manual intervention.

    Email remains one of the primary channels through which business information flows. Invoices arrive as attachments, customer enquiries land in shared inboxes, approval requests come through as formatted messages, and reports are delivered on schedule. The Outlook Trigger captures these events and feeds the email data — subject, body, sender, attachments, and metadata — directly into your n8n workflow for automated processing.

    Common automation patterns include extracting invoice data from email attachments and pushing it to accounting systems, routing customer enquiries to the appropriate team based on subject line or content analysis, saving attachments to cloud storage with proper naming and folder structure, and triggering approval workflows when specific types of emails arrive. Our patient data entry project automated the processing of incoming medical documents that arrived via email, eliminating hours of manual data entry.

    If your team spends significant time processing emails manually and wants to explore robotic process automation for your inbox workflows, our consulting team can design an automation that handles the repetitive work so your staff can focus on tasks that need human judgement.
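    The subject-line routing pattern mentioned above reduces to a small keyword lookup. A minimal sketch, with hypothetical rules and team names:

```python
def route_email(subject, rules, default="general"):
    """Route an email to a team based on the first keyword found in its
    subject line; fall back to a default queue."""
    lowered = subject.lower()
    for keyword, team in rules:
        if keyword in lowered:
            return team
    return default

# Illustrative rules; real workflows might also score body content.
rules = [("invoice", "accounts"), ("refund", "support"), ("urgent", "escalations")]
```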
  • Custom Code Tool

    Custom Code Tool is an n8n node that lets you write JavaScript or Python code and expose it as a tool your AI agent can call. While n8n provides built-in tools for common tasks like web search and calculations, many business processes require custom logic that no pre-built node covers. This node bridges that gap — you write the specific function your agent needs, and it becomes a callable tool within the agent workflow.

    The use cases are broad. You might write a custom tool that validates Australian Business Numbers (ABNs), formats data according to your company standards, queries a proprietary internal API, or applies business rules that are too specific for generic nodes. The agent receives a description of what your custom tool does, decides when to call it based on the task at hand, and uses the returned result in its reasoning and output.

    At Osher Digital, Custom Code Tool is where we implement the business-specific logic that makes each AI agent project unique to the client. In our BOM weather data pipeline, custom code handled the specific data transformation logic that no standard node could. For custom AI development and system integrations, this node is often the key to connecting AI agents with legacy systems or proprietary data formats.

    If you have unique business logic that needs to be accessible to an AI agent, our n8n consulting team can build and test custom tools that fit your exact requirements.
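    Taking the ABN example above, the check digit algorithm published by the Australian Business Register makes for a compact custom tool. The function name is ours; the weighted-checksum rule (subtract 1 from the first digit, apply the standard weights, check divisibility by 89) is the published one.

```python
def is_valid_abn(abn):
    """Validate an Australian Business Number: subtract 1 from the first
    digit, multiply each digit by its weight, and check the weighted sum
    is divisible by 89."""
    digits = [int(c) for c in abn if c.isdigit()]
    if len(digits) != 11:
        return False
    weights = [10, 1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
    digits[0] -= 1
    return sum(d * w for d, w in zip(digits, weights)) % 89 == 0
```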
  • In Memory Vector Store Load

    The In Memory Vector Store Load node in n8n lets you load documents into a temporary vector store that lives entirely in your workflow’s runtime memory. This is particularly useful when you need to perform semantic search or retrieval-augmented generation (RAG) across a smaller dataset without the overhead of provisioning an external database. For teams running proof-of-concept AI projects or processing batches of documents on a schedule, it removes the friction of configuring persistent infrastructure.

    Vector stores work by converting text into numerical embeddings, then matching queries against those embeddings to find the most relevant results. The in-memory approach suits workflows where data is loaded fresh each run — think processing daily reports, analysing support tickets from the past 24 hours, or searching through a batch of uploaded PDFs. Because everything resets between executions, you avoid stale data issues that can crop up with persistent stores.

    Businesses exploring AI agent development often start with in-memory vector stores to validate their retrieval logic before committing to production-grade solutions like Pinecone or Qdrant. It is a practical first step that lets you prove the concept works with your actual data. Once validated, migrating to a persistent store is straightforward since the embedding and query patterns remain the same.

    If your team needs help designing vector search workflows or building RAG pipelines that scale, our AI consulting team can guide you from prototype to production deployment.
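    The embed-then-match mechanics can be sketched in a few lines. This toy store (class and method names are ours) keeps vector/text pairs in a plain list and ranks results by cosine similarity, which is the core of what any in-memory store does each run:

```python
import math

class InMemoryVectorStore:
    """Minimal in-memory store: holds (vector, text) pairs for the current
    run and returns the texts most similar to a query vector."""
    def __init__(self):
        self.entries = []

    def add(self, vector, text):
        self.entries.append((vector, text))

    def search(self, query, top_k=3):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm if norm else 0.0
        ranked = sorted(self.entries, key=lambda e: cosine(query, e[0]), reverse=True)
        return [text for _, text in ranked[:top_k]]

store = InMemoryVectorStore()
store.add([1.0, 0.0], "billing question")
store.add([0.0, 1.0], "shipping delay")
results = store.search([0.9, 0.1], top_k=1)
```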
  • LangChain Code

    The LangChain Code node in n8n gives you the ability to write custom LangChain logic directly inside your automation workflows. Rather than being limited to pre-built nodes, you can drop into JavaScript or Python to orchestrate chains, agents, memory, and tool-calling patterns exactly how you need them. This is where n8n’s flexibility really shines for teams building AI-powered processes.

    LangChain is one of the most widely adopted frameworks for building applications on top of large language models. It provides abstractions for prompt templates, output parsers, document loaders, and multi-step reasoning chains. The Code node in n8n lets you tap into this entire ecosystem without leaving your workflow canvas, which means you can combine LangChain logic with hundreds of other integrations like databases, CRMs, and communication tools.

    Practically speaking, this node is valuable when the standard AI nodes do not cover your specific use case. Perhaps you need a custom output parser, a particular chain type that is not available as a built-in node, or fine-grained control over how context is passed between steps. Teams working on AI agent development frequently use it to implement custom tool-calling logic or complex reasoning chains that go beyond what drag-and-drop nodes can express.

    If you are building LangChain-based workflows and want expert guidance on architecture decisions, our custom AI development team works with businesses across Australia to design and deploy production-grade AI systems.
  • AI Agent

    The AI Agent node in n8n is one of the platform’s most powerful components. It lets you build autonomous agents that can reason about tasks, decide which tools to use, and execute multi-step processes without manual intervention. Unlike simple prompt-and-response setups, an AI agent can call APIs, query databases, search the web, and chain together multiple actions to achieve a goal.

    At its core, the node connects a large language model to a set of tools you define. The model receives a task, evaluates which tools are needed, calls them in sequence, interprets the results, and determines whether the objective has been met or if further steps are required. This loop continues until the agent completes the task or reaches a configured limit. It is the same architecture behind popular agent frameworks, made accessible through n8n’s visual workflow builder.

    Businesses are using AI agents for tasks like processing incoming enquiries and routing them to the right team, extracting data from documents and populating systems automatically, and monitoring data sources to trigger actions based on specific conditions. Our talent marketplace case study demonstrates how an agent-based approach automated application screening that previously required hours of manual review.

    Building reliable agents requires careful design around tool definitions, error handling, and guardrails. If your team is exploring AI agent development, our consultants can help you design agents that are robust enough for production use.
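    The decide-call-observe loop described above can be sketched with a stubbed planner standing in for the model. All names here are ours, for illustration: a real agent would replace fake_plan with an LLM call and give it real tools.

```python
def run_agent(task, plan_step, tools, max_steps=5):
    """Minimal agent loop: plan_step (the 'model') inspects the task plus
    prior observations, then either calls a tool or finishes, until done
    or the step limit is reached."""
    observations = []
    for _ in range(max_steps):
        decision = plan_step(task, observations)
        if decision["action"] == "finish":
            return decision["answer"]
        tool = tools[decision["action"]]
        observations.append(tool(decision["input"]))
    return "step limit reached"

# Stubbed planner: look up a figure, then finish. A real agent calls an LLM here.
def fake_plan(task, observations):
    if not observations:
        return {"action": "lookup", "input": "widgets sold"}
    return {"action": "finish", "answer": f"We sold {observations[0]} widgets."}

tools = {"lookup": lambda query: 42}
answer = run_agent("How many widgets did we sell?", fake_plan, tools)
```

    The `max_steps` cap is the "configured limit" the description mentions: without it, a confused planner could loop forever.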
  • Summarization Chain

    The Summarization Chain node in n8n automates the process of condensing long documents, articles, or data feeds into concise summaries using a large language model. Instead of manually reading through lengthy content, you can feed it into this node and receive a focused summary that captures the key points. It is particularly valuable for teams dealing with high volumes of text-based information.

    Under the hood, the node implements LangChain’s summarisation strategies, which handle documents that exceed the language model’s context window. It can split long texts into chunks, summarise each chunk individually, and then combine those summaries into a final coherent output. This means you are not limited by token limits — the node manages that complexity for you.

    Practical applications span industries. Financial teams use it to summarise daily market reports. Legal departments condense contract reviews. Customer support teams distil lengthy ticket histories into actionable overviews. Our work with an insurance technology company involved similar document processing challenges where automated summarisation saved significant manual effort.

    If your business processes large volumes of text and you want to explore how automated data processing can reduce manual workload, our AI consulting team can help you design summarisation workflows that integrate with your existing systems.
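    The chunk-then-combine strategy (often called map-reduce summarisation) is easy to sketch. The stub summariser below just keeps the first five words; a real chain would call an LLM at both the per-chunk and combine steps. Function names are ours.

```python
def map_reduce_summarise(document, summarise, chunk_size=100):
    """Map-reduce summarisation: summarise each chunk separately, then
    summarise the combined chunk summaries into one final output."""
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    partials = [summarise(chunk) for chunk in chunks]   # map step
    if len(partials) == 1:
        return partials[0]
    return summarise(" ".join(partials))                # reduce step

# Stub summariser: keep the first five words. A real chain calls an LLM here.
first_words = lambda text: " ".join(text.split()[:5])
```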
  • Structured Output Parser

    Structured Output Parser is an n8n node that takes raw text output from a language model and converts it into structured JSON data you can use in downstream workflow nodes. Large language models return free-form text by default, which is difficult to route, filter, or insert into databases. This node solves that by defining a schema — the fields and data types you expect — and parsing the model output to match that structure.

    This is essential for any workflow where AI-generated content needs to feed into other systems. If you ask an LLM to extract invoice details, categorise support tickets, or summarise documents, the Structured Output Parser ensures you get clean, typed fields like “amount”, “category”, or “summary” rather than unpredictable free text. It validates the output against your schema and handles formatting inconsistencies that language models frequently introduce.

    Osher Digital uses Structured Output Parser in nearly every AI agent workflow we build. In our patient data entry automation, parsing unstructured clinical notes into structured database fields was the core challenge. The same pattern applies to data processing pipelines and RPA workflows where AI output needs to slot into existing business systems.

    If your team is struggling to get reliable, structured data out of language models, our AI consultants know how to design schemas and prompts that produce consistent results.
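    A minimal version of the parse-and-validate step looks like this (function name and schema format are ours; a production parser would also retry or re-prompt the model on failure). It strips the markdown fences models often wrap JSON in, then checks each field's presence and type:

```python
import json

def parse_structured(raw, schema):
    """Parse model output as JSON and check each expected field is present
    with the right type; raise ValueError otherwise."""
    # Models often wrap JSON in markdown fences; strip them first.
    cleaned = raw.strip().removeprefix("```json").removesuffix("```").strip()
    data = json.loads(cleaned)
    for field, expected_type in schema.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"wrong type for {field}")
    return data

schema = {"amount": float, "category": str}
parsed = parse_structured('```json\n{"amount": 129.5, "category": "hosting"}\n```', schema)
```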
  • OpenAI

    OpenAI is an n8n node that connects your workflows to OpenAI models including GPT-4o, GPT-4, and GPT-3.5 Turbo. It lets you send prompts, receive completions, generate embeddings, and use function calling directly within your automation pipelines. Whether you are classifying incoming emails, generating content, extracting data from documents, or powering a conversational AI agent, this node provides the bridge between your workflow logic and OpenAI language models.

    The node supports chat completions with system and user messages, JSON mode for structured responses, function calling for tool-use workflows, and vision capabilities for image analysis. You can configure temperature, max tokens, and model selection per node, giving you granular control over how the AI behaves at each step of your workflow. For cost management, you can route simple tasks to GPT-3.5 Turbo and reserve GPT-4o for complex reasoning steps.

    At Osher Digital, OpenAI nodes are central to the AI agent systems and custom AI solutions we build for clients. From our medical document classification system to our talent marketplace application processing, OpenAI models power the intelligence layer while n8n handles orchestration.

    If you want to integrate AI into your business processes but are unsure where to start or which model fits your use case, our AI consulting team can guide you from proof of concept to production.
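    The cost-routing idea can be sketched as a small helper that picks a model by task complexity and builds a chat-style request. The payload mirrors the general chat-completions shape (model, messages with roles, temperature, max tokens); the function name and the complexity flag are our own illustration:

```python
def build_chat_request(task, complexity, system_prompt):
    """Route simple tasks to a cheaper model, reserve the stronger model
    for complex reasoning, and build a chat-completion style payload."""
    model = "gpt-4o" if complexity == "complex" else "gpt-3.5-turbo"
    return {
        "model": model,
        "temperature": 0.2,      # low temperature for predictable output
        "max_tokens": 500,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": task},
        ],
    }

request = build_chat_request("Classify this email: ...", "simple", "You are a classifier.")
```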
  • MultiQuery Retriever

    MultiQuery Retriever is an n8n node that improves the accuracy of vector search by generating multiple variations of a user query and retrieving results for each variation. Standard vector search with a single query often misses relevant documents because the query phrasing might not match how the information was written. MultiQuery Retriever addresses this by automatically rephrasing your original question from different angles, running each variation against your vector store, and merging the results into a single, more comprehensive set.

    This technique is especially valuable in retrieval-augmented generation (RAG) pipelines where retrieval quality directly determines the quality of your AI output. A user asking “How do I fix a slow database?” might miss documents about “query optimisation” or “index performance tuning” with a single query. MultiQuery Retriever generates those alternative phrasings automatically, significantly improving recall without manual prompt engineering for every possible question.

    Osher Digital implements MultiQuery Retriever in AI agent systems where high retrieval accuracy is critical. In knowledge base applications and internal search tools, the difference between finding the right document and returning nothing often comes down to query phrasing. We used this approach in our insurance tech project to improve how the system matched incoming data against reference documents.

    If your RAG pipeline is returning inconsistent or incomplete results, our AI consulting team can diagnose the retrieval bottleneck and implement fixes like MultiQuery Retriever.
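    Stripped to its essentials, the technique is: generate rephrasings, retrieve for each, and merge with duplicates removed. The stubs below stand in for the LLM rephraser and the vector store; all names are illustrative.

```python
def multi_query_retrieve(question, rephrase, retrieve, top_k=5):
    """Run the original question plus generated rephrasings against the
    store, then merge the result sets, dropping duplicate documents."""
    queries = [question] + rephrase(question)
    merged, seen = [], set()
    for query in queries:
        for doc in retrieve(query):
            if doc not in seen:       # keep first occurrence only
                seen.add(doc)
                merged.append(doc)
    return merged[:top_k]

# Stubs for illustration; a real pipeline uses an LLM and a vector store.
rephrase = lambda q: ["query optimisation tips", "index performance tuning"]
corpus = {
    "How do I fix a slow database?": ["doc-slow-db"],
    "query optimisation tips": ["doc-query-opt", "doc-slow-db"],
    "index performance tuning": ["doc-indexes"],
}
retrieve = lambda q: corpus.get(q, [])
results = multi_query_retrieve("How do I fix a slow database?", rephrase, retrieve)
```

    Note that "doc-slow-db" appears in two result sets but only once in the merged output, which is exactly the recall gain without inflated duplicates.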
  • Item List Output Parser

    Item List Output Parser is an n8n node that takes unstructured text from a language model and converts it into a clean array of individual items. When you ask an AI to generate a list — product names, action items, keywords, categories — the raw output comes as a block of text with inconsistent formatting. This node parses that text into a proper list structure where each item becomes a separate, usable data element in your workflow.

    The difference between getting a paragraph of comma-separated text and getting an actual array of items matters enormously for downstream automation. With a parsed list, you can loop through items, filter them, route them to different branches, or insert each one as a separate record in a database or spreadsheet. Without the parser, you would need custom code or multiple text manipulation steps to achieve the same result.

    At Osher Digital, we use Item List Output Parser in AI agent workflows where models need to generate sets of results — like extracting all action items from meeting notes, identifying relevant keywords for SEO analysis, or listing all products mentioned in a customer enquiry. It pairs well with our data processing pipelines and business automation solutions. In our patient data entry project, parsing lists of medications and conditions from clinical notes was a core requirement.

    If your workflows involve AI-generated lists that need to feed into other systems, our team can help you build reliable parsing pipelines.
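    The parsing itself is mostly a matter of stripping the list markers models like to add. A minimal sketch (function name ours) that handles numbered and bulleted lines:

```python
import re

def parse_item_list(raw):
    """Turn a model's free-form list (numbered or bulleted lines) into a
    clean Python list of items."""
    items = []
    for line in raw.splitlines():
        # Strip common list prefixes: "1.", "2)", "-", "*", "•".
        cleaned = re.sub(r"^\s*(?:\d+[.)]|[-*•])\s*", "", line).strip()
        if cleaned:
            items.append(cleaned)
    return items

items = parse_item_list("1. Follow up with supplier\n2. Update the quote\n- Book the install")
```

    Each element can then be looped over, filtered, or inserted as a separate record downstream.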
  • Calculator

    Calculator is an n8n tool node that gives AI agents the ability to perform mathematical calculations as part of their reasoning process. Language models are notoriously unreliable at arithmetic — they can confidently return wrong answers to straightforward maths problems. The Calculator tool solves this by letting the agent offload any computation to a proper calculator engine, getting exact results every time instead of hallucinated numbers.

    This node is designed to work within n8n AI Agent workflows as an available tool the agent can call. When the agent encounters a question that requires maths — calculating totals, converting currencies, working out percentages, or comparing numerical values — it invokes the Calculator tool, passes the expression, and receives the precise result. The agent then incorporates that result into its response, combining natural language reasoning with mathematical accuracy.

    Osher Digital includes the Calculator tool as standard in most AI agent builds we deliver. Any agent handling financial data, inventory calculations, pricing queries, or reporting needs reliable maths. In our property inspection automation project, calculations around pricing, scheduling, and resource allocation were part of the agent workflow. For business automation and sales automation projects, accurate calculations are non-negotiable.

    If you are building AI agents that handle numbers, our consulting team ensures they get the maths right every time.
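    A calculator tool must evaluate expressions the model hands it without executing arbitrary code. One common approach, shown here as a sketch with our own function name, is to walk Python's AST and allow only arithmetic nodes:

```python
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def calculate(expression):
    """Safely evaluate an arithmetic expression passed in by the agent,
    rejecting anything that is not plain arithmetic."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval").body)
```

    Anything outside the whitelist, such as function calls or attribute access, raises an error rather than running, which is the whole point of not using `eval`.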
  • Google PaLM Chat Model

    The Google PaLM Chat Model node connects your n8n workflows to Google’s PaLM (Pathways Language Model) family of large language models. It gives your automations access to Google’s AI capabilities for text generation, conversation, summarisation, and analysis — all configured through n8n’s visual interface without writing API client code.

    This node is particularly relevant for organisations already invested in the Google Cloud ecosystem. If your business runs on Google Workspace, uses BigQuery for analytics, or deploys services on Google Cloud Platform, the PaLM model slots neatly into your existing infrastructure and billing. You get a capable language model that integrates naturally with the rest of your Google stack.

    In practical automation terms, the PaLM Chat Model node works the same way as other language model nodes in n8n — you connect it to an AI agent, conversational chain, or any workflow component that needs natural language processing. Use it to power customer-facing chatbots, summarise meeting transcripts pulled from Google Calendar, generate email drafts based on CRM data, or classify incoming support requests. The node handles the API communication, token management, and response formatting so your workflow stays clean.

    For Australian businesses building AI agent systems or exploring business automation with language models, the PaLM node provides a solid alternative to OpenAI and Anthropic models. Having multiple model options means you can test which provider delivers the best results for your specific tasks, and you are not locked into a single vendor. Some tasks perform better on one model versus another, and the ability to swap models in n8n makes comparison straightforward.
  • Groq Chat Model

    The Groq Chat Model node connects your n8n workflows to Groq’s inference platform, which is built around their custom LPU (Language Processing Unit) hardware. The headline feature is speed — Groq delivers language model responses significantly faster than traditional GPU-based inference providers. If your automation requires near-instant AI responses, this node is worth serious consideration.

    Speed matters more than you might think in production workflows. When an AI agent needs to make multiple sequential LLM calls to reason through a problem — retrieving data, analysing it, deciding on next steps, and drafting a response — each call’s latency compounds. A model that responds in 200 milliseconds instead of 2 seconds means your multi-step agent completes in a few seconds rather than tens of seconds. For customer-facing applications, that difference directly affects user experience.

    Groq hosts popular open-source models including Llama, Mixtral, and Gemma variants. This means you get fast inference on capable models without needing to manage your own infrastructure. For AI consulting projects where clients need high-throughput AI processing — think real-time chat support, live data classification, or interactive AI agents — Groq offers a compelling price-to-performance ratio, especially for the speed-sensitive parts of a pipeline.

    The node works identically to other chat model nodes in n8n. Connect it to an AI agent, conversational chain, or any component that accepts a language model, and it functions as a drop-in replacement. This makes it easy to benchmark Groq against OpenAI, Anthropic, or Google models on your specific tasks. Many teams we work with at Osher use Groq for the fast, high-volume parts of their workflows and reserve more expensive models for tasks that demand maximum reasoning capability.
  • WhatsApp Trigger

    The WhatsApp Trigger node starts an n8n workflow whenever a message arrives on your WhatsApp Business account. It listens for incoming messages — text, images, documents, voice notes, locations — and feeds them directly into your automation pipeline. For businesses where WhatsApp is a primary communication channel (and in Australia, it increasingly is), this node turns a messaging app into a fully automated intake system.

    The practical applications are immediate. A customer sends a WhatsApp message asking about your services — your workflow receives it, an AI agent classifies the intent, looks up relevant information, and sends a personalised response, all within seconds. A field worker sends a photo of a completed job — your workflow extracts the metadata, logs it to your project management system, and notifies the office. A supplier sends a PDF invoice via WhatsApp — your workflow downloads it, extracts the data using AI, and creates the entry in your accounting software.

    For Australian businesses exploring sales automation and business automation, WhatsApp as an input channel is powerful because it meets customers where they already are. There is no app to download, no portal to log into. Customers just send a message the same way they message their friends, and your automation handles the rest. We have seen this pattern work particularly well for property services, healthcare bookings, and trade businesses where the customer base prefers quick messaging over formal enquiry forms.

    The node integrates with the WhatsApp Business API via the Meta Cloud API, so you get proper business messaging features: message templates, interactive buttons, list messages, and media handling. Combined with n8n’s AI capabilities and integration ecosystem, you can build sophisticated WhatsApp-based workflows that handle everything from initial enquiry to booking confirmation to post-service follow-up.
  • Character Text Splitter

    Character Text Splitter

    Character Text Splitter is an n8n node that breaks large text documents into smaller, manageable chunks based on character count. When you feed a massive PDF, webpage, or document into an AI model, it often exceeds token limits or produces poor results because the context window is too large. This node solves that by splitting text at logical breakpoints while respecting your specified chunk size and overlap settings. For teams building retrieval-augmented generation (RAG) pipelines or document processing workflows, chunking strategy directly affects output quality. Too large and your embeddings lose specificity. Too small and you lose context. Character Text Splitter gives you precise control over chunk size, overlap between chunks, and separator characters — letting you fine-tune how your documents get processed before they hit a vector database or language model. Osher Digital uses this node extensively in automated data processing pipelines and AI agent builds. In our medical document classification project, getting the chunk size right was critical to accurate categorisation of clinical records. If you are working with large-scale document ingestion and need help tuning your text splitting strategy, our AI consulting team can help you get it right the first time.
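As a rough illustration of what chunk size and overlap control, here is a minimal character-based splitter in Python. It is a simplified sketch (the `split_by_characters` name and parameters are ours, not the node's internals), and it ignores separator-aware splitting:

```python
def split_by_characters(text: str, chunk_size: int = 200, chunk_overlap: int = 40) -> list[str]:
    """Slide a window of chunk_size characters, stepping forward by
    chunk_size minus chunk_overlap so consecutive chunks share context."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

sample = "n8n automates workflows " * 50   # 1,200 characters of dummy text
chunks = split_by_characters(sample)
```

Each chunk's last 40 characters reappear at the start of the next chunk, which is the overlap behaviour that preserves context across boundaries.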
  • Qdrant Vector Store

    Qdrant Vector Store

    Qdrant Vector Store is an n8n node that connects your automation workflows to Qdrant, an open-source vector similarity search engine. It lets you insert, update, and query vector embeddings directly from n8n — which means you can build full retrieval-augmented generation (RAG) systems without writing custom API integration code. If your workflow involves semantic search, recommendation engines, or AI-powered document retrieval, this node handles the vector storage layer. What makes Qdrant stand out for self-hosted setups is its performance with filtered search and its straightforward deployment via Docker. You can run it alongside n8n on the same server, keeping your entire AI pipeline in-house. The n8n node supports inserting embeddings with metadata payloads, querying by vector similarity, and filtering results by payload fields — covering the core operations most RAG applications need. At Osher Digital, we use Qdrant in production for several AI agent projects where clients need fast, private vector search. Our insurance tech project relied on efficient vector retrieval for matching weather event data. If you are building a knowledge base, document search system, or AI assistant that needs to reference your own data, our AI consulting team can architect and deploy a Qdrant-backed solution tailored to your requirements.
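The core operations the node exposes, inserting vectors with metadata payloads and querying by similarity with payload filters, can be sketched with a toy in-memory store. This is illustrative Python, not the Qdrant client API; all names here are hypothetical:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class TinyVectorStore:
    """Toy stand-in for a vector store with payload filtering."""

    def __init__(self):
        self.points = []  # list of (vector, payload) pairs

    def insert(self, vector, payload):
        self.points.append((vector, payload))

    def search(self, query, top_k=3, payload_filter=None):
        # Apply the payload filter first, then rank survivors by similarity.
        candidates = [
            (cosine(query, vec), payload)
            for vec, payload in self.points
            if payload_filter is None
            or all(payload.get(k) == v for k, v in payload_filter.items())
        ]
        return sorted(candidates, key=lambda pair: pair[0], reverse=True)[:top_k]

store = TinyVectorStore()
store.insert([1.0, 0.0], {"type": "policy", "id": 1})
store.insert([0.0, 1.0], {"type": "faq", "id": 2})
store.insert([0.9, 0.1], {"type": "policy", "id": 3})
hits = store.search([1.0, 0.0], top_k=2, payload_filter={"type": "policy"})
```

The filter runs before ranking, which mirrors why filtered search performance matters: irrelevant payload types never enter the similarity comparison.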
  • Pinecone: Load

    Pinecone: Load

    Pinecone: Load is an n8n node that lets you insert vector embeddings into Pinecone, a fully managed vector database. When building AI applications that need to search through your own documents, product catalogues, or knowledge bases, you need somewhere to store the embeddings that represent your data. This node handles that storage step, taking vectors from your n8n workflow and loading them into a Pinecone index for later retrieval. Pinecone is popular because it removes the operational overhead of managing vector infrastructure. You do not need to worry about indexing, sharding, or scaling — Pinecone handles all of that. The n8n node supports batch upserts with metadata, namespace isolation, and configurable vector dimensions, making it straightforward to build production-grade RAG pipelines entirely within n8n. Our team at Osher Digital has implemented Pinecone-backed search systems for clients who prefer managed infrastructure over self-hosted options. For custom AI development projects involving document retrieval or semantic search, Pinecone paired with n8n provides a reliable foundation. We also use it in AI agent development where agents need to reference large knowledge bases. If you need help designing your vector storage architecture or choosing between managed and self-hosted options, get in touch with our team.
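Batch upserts with metadata and namespace isolation boil down to a pattern like the following sketch. `FakeIndex` is an illustrative stand-in for a managed index, not the Pinecone SDK:

```python
def chunked(items: list, size: int):
    """Yield fixed-size batches so no single upsert request gets too large."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

class FakeIndex:
    """Toy stand-in for a managed vector index; stores vectors per namespace."""

    def __init__(self):
        self.namespaces = {}

    def upsert(self, vectors, namespace="default"):
        ns = self.namespaces.setdefault(namespace, {})
        for vec_id, values, metadata in vectors:
            ns[vec_id] = (values, metadata)   # upsert: the same id overwrites

index = FakeIndex()
records = [
    (f"doc-{i}", [float(i), 0.5], {"source": "support-articles"})
    for i in range(250)
]
for batch in chunked(records, size=100):
    index.upsert(batch, namespace="articles")
```

Namespaces keep different data sets (articles, products, tickets) isolated within one index, and batching keeps each request bounded as your data volume grows.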
  • Remove Duplicates

    Remove Duplicates

    Remove Duplicates is an n8n node that filters out duplicate items from your workflow data based on field values you specify. When processing data from multiple sources — CRM exports, API responses, spreadsheet imports — duplicates inevitably creep in. This node catches them before they cause problems downstream, whether that means sending duplicate emails, creating duplicate records in your database, or double-processing invoices. The node works by comparing a field you choose (like email address, record ID, or transaction number) across all items passing through it. Only the first occurrence of each unique value continues through the workflow; subsequent duplicates get filtered out. You can also compare across multiple fields for more precise deduplication, such as matching on both name and date to catch records that share one field but differ on another. At Osher Digital, deduplication is a standard step in almost every data processing pipeline and system integration we build. In our talent marketplace project, removing duplicate applicant records was essential before feeding data into the AI processing stage. If your business is dealing with messy, duplicated data across multiple systems, our business automation team can design a clean data pipeline that eliminates duplicates and keeps your systems in sync.
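The first-occurrence-wins logic, including multi-field comparison, can be sketched in a few lines of Python (the names are ours, not n8n internals):

```python
def remove_duplicates(items: list[dict], keys: list[str]) -> list[dict]:
    """Keep the first occurrence of each unique combination of key fields."""
    seen, unique = set(), []
    for item in items:
        marker = tuple(item.get(k) for k in keys)
        if marker not in seen:
            seen.add(marker)
            unique.append(item)
    return unique

records = [
    {"name": "Dana Lee", "date": "2024-05-01", "source": "crm"},
    {"name": "Dana Lee", "date": "2024-05-01", "source": "spreadsheet"},  # true duplicate
    {"name": "Dana Lee", "date": "2024-06-12", "source": "crm"},          # same name, new date
]
clean = remove_duplicates(records, keys=["name", "date"])
```

Matching on both name and date keeps the third record, which shares a name but differs on date, exactly the multi-field case described above.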
  • Contextual Compression Retriever

    Contextual Compression Retriever

    The Contextual Compression Retriever node makes your AI retrieval workflows sharper by filtering and compressing retrieved documents before they reach your language model. Standard vector store retrieval often pulls back chunks that are mostly irrelevant — maybe only one sentence in a 500-word passage actually answers the question. This node strips away the noise, keeping only the parts that matter for the current query. For businesses building retrieval-augmented generation (RAG) systems in n8n, this is a practical upgrade. Instead of stuffing your LLM context window with loosely related text and hoping it figures out what is relevant, the Contextual Compression Retriever pre-processes the results. The language model receives focused, relevant excerpts, which means better answers, fewer hallucinations, and lower token costs per request. This matters most when you are working with large knowledge bases — internal documentation, product catalogues, compliance manuals, or client records. Australian businesses running AI agent systems for customer support or internal knowledge management see a direct improvement in answer quality when they add contextual compression to their retrieval pipeline. The difference is especially noticeable when questions are specific and the knowledge base is broad. The node works by wrapping an existing retriever (like a vector store retriever) and applying a compression step powered by a language model. You configure it once, connect it to your existing RAG chain, and every retrieval query automatically benefits from tighter, more relevant context. It is a relatively small change to your workflow that produces a measurable lift in output quality.
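Conceptually, the compression step filters retrieved chunks down to query-relevant passages. The sketch below uses simple keyword overlap where the real node applies an LLM-powered compressor, so treat it as an illustration of the idea only:

```python
def compress(query: str, documents: list[str]) -> list[str]:
    """Keep only sentences that share a term with the query; drop documents
    left with nothing relevant. A real compressor uses an LLM, not keywords."""
    terms = set(query.lower().split())
    kept = []
    for doc in documents:
        sentences = [s.strip() for s in doc.split(".") if s.strip()]
        relevant = [s for s in sentences if terms & set(s.lower().split())]
        if relevant:
            kept.append(". ".join(relevant) + ".")
    return kept

docs = [
    "Our refund policy allows returns within 30 days. The office dog is named Banjo.",
    "Shipping takes five business days.",
]
focused = compress("refund policy", docs)
```

Only the refund sentence survives; the off-topic sentence and the irrelevant document are discarded before they can consume context-window tokens.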
  • Read/Write Files from Disk

    Read/Write Files from Disk

    The Read/Write Files from Disk node gives your n8n workflows direct access to the server file system. It reads files into your workflow for processing and writes output files back to disk — handling everything from CSVs and JSON files to images, PDFs, and binary data. If your automation needs to pick up files from a directory, transform them, and save the results somewhere, this is the node that makes it happen. For businesses running self-hosted n8n instances, this node unlocks a whole category of automations that cloud-only platforms struggle with. Process invoices dropped into a shared folder, read configuration files that control workflow behaviour, generate reports and save them to a network drive, or create backup copies of important data. The node works with any file type and supports both reading single files and scanning entire directories. Australian companies managing automated data processing pipelines find this node essential for bridging the gap between file-based systems and API-driven workflows. Many legacy systems — accounting software, inventory management, government reporting tools — still rely on file exports and imports. This node lets n8n sit in the middle, reading those exports, transforming the data, and writing it back in the format the next system expects. The node is also valuable for AI development workflows where you need to read training data, save model outputs, or log results to disk for later analysis. Combined with n8n’s scheduling capabilities, you can build fully automated file processing pipelines that run on a timer without any manual intervention.
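A typical read-transform-write loop over a directory looks something like this Python sketch (the function and folder names are illustrative, not part of the node):

```python
import json
import tempfile
from pathlib import Path

def process_directory(src: Path, dest: Path, pattern: str = "*.json") -> int:
    """Read every matching file, mark it processed, write it to dest."""
    dest.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in sorted(src.glob(pattern)):
        data = json.loads(path.read_text())
        data["processed"] = True                      # the "transform" step
        (dest / path.name).write_text(json.dumps(data))
        count += 1
    return count

# Demonstrate with a throwaway directory standing in for a watched folder.
with tempfile.TemporaryDirectory() as tmp:
    src, dest = Path(tmp) / "inbox", Path(tmp) / "done"
    src.mkdir()
    (src / "a.json").write_text('{"invoice": 101}')
    (src / "b.json").write_text('{"invoice": 102}')
    written = process_directory(src, dest)
    first = json.loads((dest / "a.json").read_text())
```

Run on a schedule, this is the shape of the file-based bridge between legacy exports and API-driven workflows described above.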
  • Redis Chat Memory

    Redis Chat Memory

    The Redis Chat Memory node gives your n8n AI workflows persistent conversation memory using Redis as the storage backend. When you build chatbots or AI agents, the language model has no built-in memory between requests — every interaction starts fresh. This node solves that by storing and retrieving conversation history from Redis, so your AI can reference what was said earlier in the conversation and respond with full context. Redis is a natural fit for chat memory because it is fast, lightweight, and designed for exactly this kind of ephemeral-but-important data. Conversation histories do not need the overhead of a relational database, but they do need low-latency read and write access. Redis delivers sub-millisecond response times, which means your AI agent can load a full conversation history without adding noticeable delay to the user experience. For Australian businesses deploying AI agents for customer support, internal helpdesks, or sales qualification, conversation memory is not optional — it is what separates a useful assistant from a frustrating one. Customers expect the AI to remember their name, their issue, and what has already been discussed. The Redis Chat Memory node makes this work reliably, even across multiple workflow executions and server restarts. The node supports session-based memory with configurable keys, so you can maintain separate conversation threads for different users, channels, or topics. Set a TTL (time to live) to automatically expire old conversations and keep your Redis instance lean. It integrates directly with n8n’s AI agent and chain nodes, requiring minimal configuration to add persistent memory to any conversational workflow.
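The session-key-plus-TTL behaviour can be modelled with a toy class. A real deployment would use Redis commands such as LPUSH and EXPIRE; this pure-Python stand-in only illustrates the semantics:

```python
import time

class ChatMemory:
    """Toy session memory; a real deployment stores this in Redis."""

    def __init__(self, ttl_seconds: int = 3600):
        self.ttl = ttl_seconds
        self.sessions = {}  # session_key -> (expires_at, [messages])

    def append(self, session_key, role, content, now=None):
        now = time.time() if now is None else now
        expires_at, messages = self.sessions.get(session_key, (0, []))
        if now >= expires_at:        # expired sessions start fresh
            messages = []
        messages.append({"role": role, "content": content})
        self.sessions[session_key] = (now + self.ttl, messages)

    def history(self, session_key, now=None):
        now = time.time() if now is None else now
        expires_at, messages = self.sessions.get(session_key, (0, []))
        return messages if now < expires_at else []

mem = ChatMemory(ttl_seconds=60)
mem.append("user-1", "user", "Hi, I ordered last week", now=0)
mem.append("user-1", "assistant", "Thanks, checking that order now", now=1)
mem.append("user-2", "user", "What are your opening hours?", now=0)
```

Separate session keys keep conversation threads isolated per user, and the TTL means stale conversations expire automatically rather than accumulating forever.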
  • Workflow Retriever

    Workflow Retriever

    The Workflow Retriever node lets your AI agents and chains pull information from other n8n workflows as if they were knowledge sources. Instead of connecting to a vector database or external API for retrieval, this node calls a separate n8n workflow that returns the relevant documents or data. It turns any workflow into a retrievable knowledge source for your RAG (retrieval-augmented generation) pipelines. This opens up retrieval patterns that are not possible with standard vector store approaches. Your retriever workflow can query a database, call an API, read from a spreadsheet, filter results based on business logic, or combine data from multiple sources — all before returning the results to your AI chain. The flexibility is significant: you are not limited to similarity search against embeddings. You can build any retrieval logic you want. For businesses running complex system integrations, this is a practical way to give AI agents access to live business data. An AI customer support agent could retrieve the latest order status from your ERP, current stock levels from your warehouse system, and relevant policy documents from your knowledge base — all through separate retriever workflows that each handle their own data source and transformation logic. The pattern also keeps your workflows modular and maintainable. Each retriever workflow is self-contained with its own error handling, credentials, and logic. When a data source changes its API or schema, you update one retriever workflow without touching your main AI agent workflow. Teams at Osher use this pattern extensively in n8n consulting projects where AI agents need access to multiple business systems simultaneously.
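The pattern reduces to a registry of retriever functions, each standing in for a self-contained sub-workflow with its own data source. A hedged sketch (all names and return shapes are hypothetical):

```python
def orders_workflow(query: str) -> list[dict]:
    """Stand-in for a sub-workflow that queries an ERP for order data."""
    return [{"source": "erp", "text": f"Order status for '{query}': shipped"}]

def knowledge_base_workflow(query: str) -> list[dict]:
    """Stand-in for a sub-workflow that searches policy documents."""
    return [{"source": "kb", "text": "Returns are accepted within 30 days."}]

RETRIEVERS = {"orders": orders_workflow, "knowledge_base": knowledge_base_workflow}

def retrieve(sources: list[str], query: str) -> list[dict]:
    """Fan a query out to several retriever workflows and pool the results."""
    results = []
    for name in sources:
        results.extend(RETRIEVERS[name](query))
    return results

docs = retrieve(["orders", "knowledge_base"], "order 1042 return")
```

Swapping a data source means replacing one entry in the registry; the calling agent never changes, which is the modularity benefit described above.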
  • RSS Feed Trigger

    RSS Feed Trigger

    The RSS Feed Trigger node in n8n monitors RSS and Atom feeds and starts a workflow whenever new content appears. It polls the feed on a schedule you define — every five minutes, every hour, once a day — and fires the workflow for each new item it detects. This is the starting point for any automation that reacts to published content, whether that is news articles, blog posts, podcast episodes, or product updates. For sales and marketing teams, RSS Feed Trigger is a practical way to automate competitive monitoring, content curation, and lead intelligence. Set it to watch competitor blogs, industry news feeds, or job posting boards, then pipe new items through AI models to summarise, classify, or extract relevant details. The team gets a curated feed of what matters without manually checking dozens of sources every morning. The node also works well for operational workflows. Monitor government gazette feeds for regulatory changes, track vendor announcement pages for product updates, or watch job boards for new listings that match your criteria. We have built similar monitoring pipelines for clients using system integration workflows that connect RSS data to Slack alerts, CRM records, and reporting dashboards. If you need to monitor external content sources and turn them into actionable data inside your business systems, our n8n consulting team can design a monitoring pipeline that filters out noise and surfaces what actually matters to your team.
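Under the hood, a polling trigger works by remembering which items it has already seen, typically by GUID. A minimal sketch of that deduplication (names are illustrative):

```python
def new_items(feed_items: list[dict], seen_guids: set) -> list[dict]:
    """Return only items not seen in earlier polls; remember their GUIDs."""
    fresh = []
    for item in feed_items:
        if item["guid"] not in seen_guids:
            seen_guids.add(item["guid"])
            fresh.append(item)
    return fresh

seen = set()
poll_1 = new_items([{"guid": "a1", "title": "Post one"},
                    {"guid": "a2", "title": "Post two"}], seen)
poll_2 = new_items([{"guid": "a1", "title": "Post one"},
                    {"guid": "a2", "title": "Post two"},
                    {"guid": "a3", "title": "Post three"}], seen)
```

The second poll fires only for the genuinely new item, so downstream AI summarisation or alerting steps run once per article rather than on every poll.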
  • Cohere Model

    Cohere Model

    The Cohere Model node in n8n connects your workflows to Cohere’s language AI platform. Cohere specialises in enterprise-grade text understanding — classification, embeddings, reranking, and retrieval-augmented generation — built for production reliability rather than conversational novelty. The node handles API authentication and request formatting so you can plug Cohere models into your business automation workflows directly. Where Cohere stands apart from general-purpose chat models is its focus on search and retrieval quality. The Cohere Rerank model is particularly valuable in RAG pipelines — it takes a set of candidate documents retrieved from a vector store and reorders them by actual relevance to the query, dramatically improving the accuracy of AI-generated answers. If your retrieval pipeline returns ten documents but only three are truly relevant, Cohere Rerank surfaces those three at the top. For data processing workflows, Cohere’s classification and embedding models are strong choices. The classification endpoint lets you categorise text with just a few training examples, which is faster to set up than fine-tuning a model. The embedding models produce high-quality vector representations for semantic search, clustering, and deduplication tasks across large document sets. If you are building search, classification, or document processing workflows and want to evaluate whether Cohere’s specialised models could improve your results compared to general-purpose alternatives, our AI consulting team can run a comparison on your actual data.
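Reranking takes candidates from a first-pass retrieval and reorders them by relevance to the query. The sketch below substitutes a naive term-overlap score for Cohere's trained model, purely to show the shape of the operation:

```python
def rerank(query: str, documents: list[str], top_n: int = 2) -> list[str]:
    """Reorder candidate documents by a relevance score and keep the best.
    Term overlap stands in for a trained reranking model here."""
    terms = set(query.lower().split())

    def score(doc: str) -> float:
        return len(terms & set(doc.lower().split())) / len(terms)

    return sorted(documents, key=score, reverse=True)[:top_n]

candidates = [
    "Office parking map and access codes",
    "The parental leave policy covers 12 weeks",
    "Leave requests go to HR for approval",
]
best = rerank("parental leave policy", candidates)
```

The fully relevant document surfaces first and the irrelevant one is dropped, which is the "three truly relevant out of ten" effect described above.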
  • Embeddings OpenAI

    Embeddings OpenAI

    The Embeddings OpenAI node converts text into numerical vector representations using OpenAI’s embedding models. These vectors capture the semantic meaning of your text, enabling similarity search, clustering, and retrieval-augmented generation (RAG) across your data. Every RAG workflow in n8n that uses OpenAI for embeddings runs through this node — it is the bridge between your raw text and your vector database. In practical terms, embeddings power the “search” side of any AI knowledge base or question-answering system. When you load documents into a vector store, the Embeddings OpenAI node converts each chunk of text into a vector. When a user asks a question, the same node converts that question into a vector. The vector store then finds the document chunks closest in meaning to the question, and those chunks get fed to an AI model as context for generating an answer. The current model, text-embedding-3-small, offers strong performance at low cost. For workflows processing thousands of documents, embedding costs are typically a fraction of the generation costs. We have used this pattern in data pipeline projects and document classification systems where the quality of embeddings directly affects how well the AI finds and uses relevant information. If you are building a knowledge base, document search, or RAG system and need the embedding layer set up properly, our custom AI development team can design the vector pipeline from document ingestion through to accurate retrieval.
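The key mechanic is that documents and questions pass through the same embedding function, so their vectors are comparable. The toy bag-of-words embedder below stands in for a model like text-embedding-3-small; it is illustrative only:

```python
import math

def embed(text: str, vocab: list[str]) -> list[float]:
    """Toy bag-of-words embedding; a real workflow calls an embedding model."""
    words = text.lower().split()
    vec = [float(words.count(term)) for term in vocab]
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def top_chunks(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Embed chunks and question the same way, then rank by dot product."""
    vocab = sorted({w for c in chunks for w in c.lower().split()})
    chunk_vecs = [embed(c, vocab) for c in chunks]   # the "load documents" side
    q_vec = embed(question, vocab)                   # the "ask a question" side
    scored = sorted(
        zip(chunks, (sum(a * b for a, b in zip(q_vec, v)) for v in chunk_vecs)),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return [chunk for chunk, _ in scored[:k]]

chunks = [
    "the cat sat on the mat",
    "dogs chase the ball",
    "invoices are due in thirty days",
]
answer = top_chunks("when is my invoice due", chunks)
```

The question and the invoice chunk share meaning-bearing vocabulary, so that chunk ranks highest and would be fed to the model as context for the answer.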
  • Zep Vector Store

    Zep Vector Store

    The Zep Vector Store node in n8n connects your workflows to Zep’s purpose-built memory and vector storage platform. Zep handles both long-term document storage for RAG systems and conversation memory for AI agents, making it a two-in-one solution for workflows that need both capabilities. The node manages document insertion, vector search, and memory retrieval without requiring separate infrastructure for each function. What sets Zep apart from general-purpose vector databases is its focus on AI application needs. It automatically handles document chunking, embedding generation, and metadata indexing — tasks that typically require separate nodes in your n8n workflow. When you store a document in Zep, it processes and indexes the content on the server side, reducing the complexity of your workflow and the number of API calls to embedding providers like OpenAI. For teams building AI agents that need both knowledge base access and conversation memory, Zep simplifies the architecture considerably. Instead of connecting separate vector store, embedding, and memory nodes, you connect one Zep node that handles all three roles. We have found this approach particularly effective for internal knowledge bots and customer support agents where the agent needs to search company documents while maintaining conversation context. If you are building a RAG system or conversational AI agent and want to reduce infrastructure complexity, our n8n consulting team can help you evaluate whether Zep is the right fit for your workflow architecture.
  • Embeddings Azure OpenAI

    Embeddings Azure OpenAI

    The Embeddings Azure OpenAI node generates text embeddings through Microsoft’s Azure-hosted OpenAI service. It provides the same embedding models as OpenAI — text-embedding-3-small and text-embedding-3-large — but runs them within your Azure subscription where you control the region, networking, and access policies. For organisations that already use Azure or have enterprise agreements with Microsoft, this node keeps your AI workloads consolidated under one cloud provider. The primary reason businesses choose Azure OpenAI over the standard OpenAI API is control. Your data stays within your Azure tenant and the region you select. Network traffic can stay on private endpoints rather than traversing the public internet. Access is governed by Azure Active Directory rather than a simple API key. These characteristics matter for regulated industries and organisations with strict IT governance requirements. In n8n, the node works identically to the standard Embeddings OpenAI node — it converts text into vectors for storage in vector databases and powers the retrieval side of RAG pipelines. The difference is purely in where the model runs and how authentication works. Pair it with any vector store node in n8n to build knowledge bases, document search systems, and AI agent memory layers that comply with your organisation’s Azure policies. If your organisation runs on Azure and needs to build AI-powered search or document processing workflows within your existing cloud governance, our custom AI development team can architect a solution that meets your compliance requirements while delivering practical business results.
  • Anthropic Chat Model

    Anthropic Chat Model

    The Anthropic Chat Model node connects n8n workflows to Claude, Anthropic’s advanced AI assistant. If your business needs natural language understanding that goes beyond simple keyword matching, this node gives you direct access to one of the most capable large language models available. It slots into any n8n workflow where you need text generation, summarisation, classification, or conversational responses — without writing API integration code from scratch. For Australian businesses dealing with high volumes of customer enquiries, internal knowledge requests, or document review tasks, the Anthropic Chat Model node turns what used to be hours of manual work into something that runs in seconds. Connect it to a WhatsApp trigger or email input, and you have an AI-powered response system that actually understands context. Pair it with a vector store retriever and your internal documents, and staff can query company knowledge in plain English. Where this node really earns its place is in workflows that chain multiple AI steps together. Use it as the reasoning engine inside an AI agent that reads customer messages, looks up order history from your CRM, and drafts a personalised reply — all within a single n8n execution. Businesses we work with at Osher have used exactly this pattern to cut response times from hours to minutes while keeping quality consistent. The node supports Claude 3.5 Sonnet, Claude 3 Opus, and other Anthropic models, so you can balance cost against capability depending on the task. Straightforward credential setup means you can have it running in your workflow within minutes, not days.
  • Auto-fixing Output Parser

    Auto-fixing Output Parser

    The Auto-fixing Output Parser node solves one of the most common headaches in AI-powered workflows: getting structured data out of a language model that insists on returning messy, inconsistent responses. When you ask an LLM to return JSON or follow a specific schema, it often adds extra text, misses fields, or wraps the output in markdown code fences. This node catches those errors and automatically corrects them before your downstream nodes choke on bad data. In production n8n workflows, unreliable AI output is not just annoying — it breaks entire automations. A single malformed JSON response can halt a pipeline that processes customer orders, routes support tickets, or updates your CRM. The Auto-fixing Output Parser acts as a safety net, intercepting the raw model output and using a secondary LLM call to repair it against your defined schema. Your workflow keeps running even when the AI gets creative with formatting. This node is particularly valuable for Australian businesses running automated data processing pipelines where accuracy matters. Think invoice extraction, lead qualification, or medical form parsing — tasks where the output needs to conform to an exact structure every single time. Instead of building elaborate error handling logic, you let the parser handle the messiness so your team can focus on what happens with the clean data. Pair it with any chat model node (OpenAI, Anthropic, Groq) and a structured output parser, and you have a robust chain that delivers reliable structured data from free-text AI responses. It is one of those nodes that does not look exciting on paper but saves you hours of debugging in practice.
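The kind of repair involved can be approximated mechanically. The sketch below strips code fences, extracts the JSON object from surrounding chatter, and defaults missing fields; the actual node goes further by re-prompting an LLM against your schema (function name and fields here are hypothetical):

```python
import json
import re

def parse_with_fixes(raw: str, required_fields=("intent", "priority")) -> dict:
    """Recover a JSON object from a messy model response. The real node
    re-prompts an LLM to repair output; these fixes are purely mechanical."""
    text = re.sub(r"```(?:json)?", "", raw)           # drop markdown code fences
    match = re.search(r"\{.*\}", text, re.DOTALL)     # ignore chatter around the object
    if match is None:
        raise ValueError("no JSON object found in model output")
    data = json.loads(match.group(0))
    for field in required_fields:
        data.setdefault(field, None)                  # surface missing fields explicitly
    return data

raw = 'Sure! Here is the result:\n```json\n{"intent": "refund"}\n```'
parsed = parse_with_fixes(raw)
```

The preamble and fences are discarded, and the missing `priority` field is defaulted rather than crashing the downstream node that expects it.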
  • Chat Trigger

    Chat Trigger

    The n8n Chat Trigger node lets you kick off automated workflows the moment a user sends a message through a chat interface. Rather than polling for updates or relying on manual checks, Chat Trigger listens for incoming messages in real time and fires your workflow instantly. It sits at the start of any conversational automation you build in n8n, acting as the entry point for everything from customer support bots to internal helpdesk assistants. For businesses handling high volumes of customer enquiries, Chat Trigger removes the bottleneck of manual triage. Incoming messages hit your workflow, get routed through logic branches, and reach the right team or AI agent without anyone copying and pasting between systems. We have seen this pattern work well for healthcare intake workflows and application processing pipelines where speed matters. Chat Trigger pairs naturally with AI model nodes like OpenAI or Google Gemini to build conversational agents that actually do things — look up orders, update CRM records, or escalate to a human when the question falls outside the model’s scope. The node supports both webhook-based and embedded chat widget setups, so you can connect it to your website, Slack workspace, or any platform that sends HTTP requests. If you need help designing a chat automation that handles real conversations without falling apart at edge cases, our AI agent development team can scope and build it with you.
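The triage step after the trigger often reduces to routing logic like this sketch, where keyword rules stand in for the AI classification a production workflow would use (route names are hypothetical):

```python
def route_message(message: str) -> str:
    """Toy triage: decide which branch handles an incoming chat message.
    Keyword rules stand in for an AI classifier here."""
    text = message.lower()
    if any(word in text for word in ("refund", "invoice", "billing")):
        return "billing-team"
    if any(word in text for word in ("urgent", "outage", "down")):
        return "human-escalation"
    return "ai-agent"   # default: let the AI agent answer
```

In n8n this branching would live in a Switch node or an AI classification step after the trigger; the point is that every message lands on exactly one downstream path.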