AI & Automation

  • Microsoft Entra ID (Azure Active Directory)

    Microsoft Entra ID (formerly Azure Active Directory) is the identity and access management platform that underpins most enterprise Microsoft environments. The n8n Entra ID node lets you automate user provisioning, group management, licence assignments, and access control workflows — replacing manual admin tasks that eat into IT team capacity every single day. Managing identities at scale is one of those problems that grows quietly until it becomes unmanageable. New starters need accounts created across multiple systems. Leavers need access revoked promptly for security compliance. Group memberships change as people move between teams. Doing this manually is slow, error-prone, and a genuine security risk when offboarding gets delayed. With n8n and the Entra ID node, you can build workflows that trigger on HR system events — like a new hire record in BambooHR or a termination in your HRIS — and automatically create, update, or disable Entra ID accounts. Combine it with nodes for Slack, Google Workspace, or Jira to handle the full onboarding and offboarding chain in one automated sequence. Osher Digital helps Australian businesses automate identity management and system integrations using n8n. If your IT team is drowning in manual user provisioning or you need tighter offboarding controls for compliance, our business automation team can build it properly.
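
    For a sense of the API the node wraps, here is a minimal Python sketch that creates a user via Microsoft Graph. It assumes you have already obtained an OAuth access token with the User.ReadWrite.All permission; the name, UPN, and password shown are placeholders.

    ```python
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    token = "<access-token-from-your-app-registration>"  # assumed: acquired via OAuth

    # Minimal sketch of the provisioning call: create a user in Entra ID.
    new_user = {
        "accountEnabled": True,
        "displayName": "Jane Citizen",
        "mailNickname": "jane.citizen",
        "userPrincipalName": "jane.citizen@example.com",
        "passwordProfile": {
            "forceChangePasswordNextSignIn": True,
            "password": "<initial-password>",
        },
    }

    resp = requests.post(
        f"{GRAPH}/users",
        headers={"Authorization": f"Bearer {token}"},
        json=new_user,
        timeout=30,
    )
    resp.raise_for_status()
    print("Created user id:", resp.json()["id"])
    ```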
  • Recorded Future

    Recorded Future is a threat intelligence platform that aggregates and analyses data from across the open, deep, and dark web to identify cybersecurity threats before they reach your organisation. The n8n Recorded Future node lets you pull threat intelligence data directly into automated workflows — turning raw threat feeds into actionable alerts, enriched incident reports, and automated response actions. Security teams are drowning in data. Vulnerability disclosures, malware indicators, compromised credentials, and threat actor activity reports pour in from dozens of sources. Without automation, analysts spend most of their time collecting and correlating information rather than actually responding to threats. The Recorded Future node changes that dynamic by feeding curated intelligence straight into your workflow engine. In practice, this node powers workflows that enrich security alerts with threat context, automatically flag high-risk indicators of compromise (IOCs), cross-reference internal logs against known threat actors, and generate prioritised reports for your security operations centre. Combined with other n8n nodes for Slack, email, or ticketing systems, you can build end-to-end threat response pipelines. Osher Digital helps Australian businesses build automated data processing and security intelligence workflows using n8n. If your security team needs faster threat detection and response, our systems integration specialists can connect Recorded Future to your existing security stack.
  • Carbon Black

    Carbon Black is an endpoint security platform from Broadcom (formerly part of VMware) that provides threat detection, investigation, and response capabilities across your organisation’s devices. The n8n Carbon Black node connects this endpoint telemetry to your automated workflows — enabling security teams to respond to threats faster by removing the manual steps between detection and action. Endpoint security generates enormous volumes of data. Every process execution, network connection, file modification, and registry change on every managed device creates telemetry that needs monitoring. Security teams cannot manually review all of it, which is exactly why automated workflows matter. The Carbon Black node lets you pull alerts, query device status, isolate compromised endpoints, and retrieve investigation data without leaving your workflow engine. Practical use cases include automated alert enrichment (pulling Carbon Black alert details and combining them with other threat intelligence), endpoint isolation workflows that quarantine compromised machines the moment a critical alert fires, and compliance reporting that aggregates endpoint health data on a schedule. When seconds matter during an active incident, having these responses automated rather than manual can be the difference between containment and breach. Osher Digital builds security automation workflows for Australian businesses using n8n. If your team needs faster incident response or wants to automate routine endpoint security operations, our systems integration and business automation specialists can connect Carbon Black to your broader security operations.
  • Cisco Umbrella

    Cisco Umbrella is a cloud-delivered security platform that provides DNS-layer protection, secure web gateway capabilities, and threat intelligence across your network. The n8n Cisco Umbrella node lets you automate DNS security operations — managing domain block lists, investigating DNS activity, and responding to threats without manual intervention in the Umbrella dashboard. DNS is the first step in almost every internet connection, which makes it an ideal enforcement point for security. Cisco Umbrella inspects DNS requests before they resolve, blocking connections to known malicious domains, phishing sites, and command-and-control servers. But managing policies, investigating blocked requests, and maintaining custom block lists manually takes time that security teams rarely have. With n8n, you can build workflows that automatically add newly discovered malicious domains to your Umbrella block lists, pull DNS activity reports on a schedule, investigate suspicious domains flagged by other security tools, and generate compliance reports showing blocked threat categories. It integrates cleanly with the rest of your security stack through n8n’s node ecosystem. Osher Digital helps Australian businesses automate their security operations with n8n. Whether you need DNS security policy management, automated data processing for security logs, or end-to-end system integrations across your security stack, our team builds workflows that keep your defences current without constant manual effort.
  • n8n Form Trigger

    The n8n Form Trigger node creates web forms that feed directly into your automation workflows. Instead of using a separate form builder, connecting it to Zapier, and routing data through multiple tools, you build the form and the automation in one place. When someone submits the form, your workflow fires immediately with all the submitted data available for processing. This is particularly useful for internal operations — employee requests, client intake forms, feedback surveys, bug reports, and approval workflows. Rather than collecting form data in one tool and manually transferring it somewhere else, the Form Trigger routes submissions straight into whatever system needs them: a CRM, a project management tool, an email notification, or a database. For client-facing use cases, the Form Trigger handles lead capture, booking requests, and support ticket creation. You can validate inputs, send confirmation emails, create records in your CRM, and notify your team — all triggered automatically from a single form submission. Teams running business automation projects find this node eliminates an entire category of manual data entry work. We used a similar form-to-workflow approach when building an automated patient data entry system for a healthcare client, where incoming form submissions needed to be validated, classified, and routed to the right clinical team without manual intervention. If you need help designing form-driven workflows, our n8n consultants can architect a solution that fits your process.
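
    For illustration, this Python snippet simulates a form submission hitting an n8n form endpoint. The URL and field names are hypothetical; n8n generates the real production URL (under /form/<path>) when you activate the workflow.

    ```python
    import requests

    # Simulated submission to an n8n form. The URL and field names below are
    # hypothetical placeholders, not values n8n assigns.
    resp = requests.post(
        "https://your-n8n.example.com/form/client-intake",
        data={
            "Full Name": "Jane Citizen",
            "Email": "jane@example.com",
            "Request": "Quote for workflow automation",
        },
        timeout=30,
    )
    print(resp.status_code)  # the workflow now runs with these fields as input
    ```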
  • TOTP

    The TOTP (Time-based One-Time Password) node in n8n generates and validates time-based authentication codes within your workflows. TOTP is the same technology behind authenticator apps like Google Authenticator and Authy — it produces a short-lived numeric code that changes every 30 seconds, tied to a shared secret key. This node lets you incorporate that security mechanism directly into your automation pipelines. The most common use case is automated authentication with services that require two-factor authentication (2FA). If your workflow needs to log into a system that demands a TOTP code, this node generates that code programmatically, eliminating the need for someone to manually open an authenticator app and type in numbers. This is essential for unattended automation that interacts with secured APIs or portals. For businesses concerned about system integration security, the TOTP node also enables you to build your own 2FA verification flows. You can generate TOTP secrets for users, validate codes they submit, and enforce time-based authentication as part of custom approval workflows or secure form submissions. Security and compliance are non-negotiable for industries like finance, healthcare, and legal services. If your automation workflows need to interact with secured systems or you want to add 2FA to your internal processes, our consulting team can help you design workflows that meet your compliance requirements without creating manual bottlenecks.
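
    A minimal sketch of the underlying mechanism using the pyotp library, which implements the same TOTP standard (RFC 6238) the node relies on: generate a shared secret, derive the current code, and verify a submitted code.

    ```python
    import pyotp

    # Generate a shared secret once and store it securely (e.g. in n8n credentials).
    secret = pyotp.random_base32()

    totp = pyotp.TOTP(secret)   # 30-second window, 6 digits by default
    code = totp.now()           # current code, e.g. "492039"
    print("Current code:", code)

    # Verification side: accept the code within a small clock-drift window.
    assert totp.verify(code, valid_window=1)
    ```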
  • Embeddings AWS Bedrock

    The Embeddings AWS Bedrock node in n8n generates vector embeddings using Amazon’s Bedrock service, which provides access to foundation models from providers like Amazon (Titan), Cohere, and others through a unified AWS API. For organisations already running infrastructure on AWS, this node keeps your AI workloads within the same cloud ecosystem — no need to send data to third-party embedding APIs outside your existing security perimeter. Data residency and security are the primary reasons teams choose AWS Bedrock for embeddings over standalone API providers. Bedrock runs within your AWS region, your data stays within your VPC boundaries, and access is controlled through IAM policies. For industries like finance, healthcare, and government where data handling rules are strict, this matters more than marginal differences in embedding quality between providers. From a technical standpoint, Bedrock embeddings work the same way as any other provider in n8n — you feed in text, get back a vector, and store it in your chosen vector database for semantic search or AI agent retrieval. The difference is operational: billing goes through your existing AWS account, access logs feed into CloudTrail, and the infrastructure scales automatically through AWS’s managed service layer. If your organisation is committed to AWS and needs to build custom AI solutions that comply with your security policies, our team can help you architect RAG pipelines and AI workflows that run entirely within your AWS environment — from embedding generation through to vector storage and inference.
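
    As a sketch of the call the node makes on your behalf, here is a boto3 request to a Titan embeddings model. The model ID, region, and response field follow Amazon's documented Titan text embeddings API, but treat them as assumptions to verify against your own Bedrock account.

    ```python
    import json
    import boto3

    # Assumed: AWS credentials are configured and the Titan embeddings model
    # below is enabled for your account in this region.
    client = boto3.client("bedrock-runtime", region_name="ap-southeast-2")

    resp = client.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": "Refund policy for enterprise customers"}),
    )
    payload = json.loads(resp["body"].read())
    vector = payload["embedding"]   # list of floats, ready for a vector store
    print(len(vector), "dimensions")
    ```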
  • OpenAI Assistant

    The OpenAI Assistant node in n8n connects your workflows to OpenAI’s Assistants API, giving you access to persistent, stateful AI assistants that can use tools, retrieve files, and maintain conversation context across interactions. Unlike a simple ChatGPT API call that processes one message at a time, Assistants remember previous messages in a thread, can search through uploaded documents, and execute code — all managed by OpenAI’s infrastructure. This changes what you can build with n8n significantly. Instead of stitching together memory management, document retrieval, and tool calling manually across dozens of nodes, the OpenAI Assistant handles that complexity as a single managed service. You upload your documents, define the assistant’s instructions and tools, and the API handles context windows, retrieval ranking, and conversation threading automatically. For businesses building AI agents that need to reference company documents, answer questions from knowledge bases, or perform multi-step reasoning tasks, this node offers a streamlined path. Customer support agents that search policy documents, internal assistants that answer HR questions from handbooks, and research tools that analyse uploaded reports are all practical applications. Our team built a similar document-aware assistant when developing an AI medical document classification system that needed to process and reason about clinical documents accurately. If you are evaluating whether to build your own agent framework or use OpenAI’s managed Assistants API, our AI development team can help you assess the trade-offs for your specific use case — including cost, control, latency, and data privacy considerations.
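
    The flow the node manages looks roughly like this against OpenAI's Python SDK (Assistants API, beta namespace): create an assistant, start a thread, add a message, run, and read the reply. The model name and instructions are placeholders.

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY in the environment

    # One-off setup: an assistant with instructions (tools/files omitted here).
    assistant = client.beta.assistants.create(
        model="gpt-4o",
        name="Policy Q&A",
        instructions="Answer questions using the uploaded policy documents.",
    )

    # Each conversation lives in a thread; OpenAI stores the message history.
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content="What is our refund window?"
    )

    # Run the assistant against the thread and wait for it to finish.
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=assistant.id
    )
    if run.status == "completed":
        messages = client.beta.threads.messages.list(thread_id=thread.id)
        print(messages.data[0].content[0].text.value)  # newest message first
    ```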
  • Extract from File

    Extract from File is an n8n utility node that pulls structured data out of documents — PDFs, spreadsheets, CSVs, and other file formats that would otherwise need manual handling. Instead of copying and pasting values from invoices, reports, or data exports, this node reads the file and returns clean, usable fields your workflow can act on immediately. Most businesses deal with incoming files daily. Supplier invoices arrive as PDFs, clients send spreadsheets, and internal teams export CSVs from various platforms. Without automation, someone has to open each file, find the relevant data, and re-enter it somewhere else. Extract from File eliminates that manual step entirely, feeding parsed data straight into downstream nodes for processing, storage, or analysis. When combined with other n8n nodes, Extract from File becomes the starting point for document-driven workflows. You might pull line items from purchase orders and push them into your accounting system, or extract contact details from uploaded forms and create CRM records automatically. The node handles the parsing so the rest of your workflow can focus on what to do with the data. If your team spends time manually pulling information from files, this is where that stops. Osher Digital helps Australian businesses build automated data processing pipelines that start with nodes like Extract from File and end with hours saved every week. Talk to us about business automation that actually fits your operations.
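
    As a toy illustration of the parsing step the node performs (shown here for the CSV case in plain Python), the file name and column headings are invented:

    ```python
    import csv
    from pathlib import Path

    # Read a supplier export and keep only the fields a downstream system needs.
    rows = []
    with Path("supplier_invoices.csv").open(newline="") as f:
        for record in csv.DictReader(f):
            rows.append({
                "invoice_no": record["InvoiceNumber"],
                "total": float(record["TotalAmount"]),
                "due": record["DueDate"],
            })

    print(f"Parsed {len(rows)} invoices; first total: {rows[0]['total']:.2f}")
    ```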
  • Zep Vector Store: Load

    The Zep Vector Store: Load node in n8n retrieves stored vector embeddings from a Zep memory server, making them available for similarity search and retrieval-augmented generation (RAG) workflows. If you have already indexed documents, conversation histories, or knowledge bases into Zep, this node lets you query that data programmatically within your automation pipelines. Businesses building AI assistants or internal knowledge tools often hit the same wall: the language model knows nothing about your company. Zep solves this by storing your documents as vector embeddings that can be searched semantically. The Load node is the retrieval half of that equation — it pulls relevant context from Zep so your AI can answer questions grounded in your actual data rather than generic training knowledge. This node is especially valuable for teams running AI agent workflows that need long-term memory or access to large document collections. Pair it with an embeddings node and a language model to build RAG pipelines that retrieve the most relevant chunks of text before generating a response. The result is an AI that can reference your internal policies, product documentation, or client records accurately. Our custom AI development team has built RAG pipelines for clients across healthcare, insurance, and professional services — including a medical document classification system that needed precise retrieval from thousands of clinical documents.
  • Supabase Vector Store

    The Supabase Vector Store node in n8n connects your workflows to Supabase’s pgvector extension, letting you store, retrieve, and search vector embeddings directly within a PostgreSQL database. For teams already using Supabase as their backend, this node removes the need for a separate vector database — your embeddings live alongside your application data in one place. Vector search is the backbone of retrieval-augmented generation (RAG) and semantic search applications. Instead of relying on exact keyword matches, you store text as mathematical representations (embeddings) and search by meaning. The Supabase Vector Store node handles both the indexing and retrieval sides, making it straightforward to build AI workflows that understand context rather than just matching strings. This is particularly useful for organisations building AI agents that need to reference internal knowledge bases, product catalogues, or support documentation. By embedding your content into Supabase and querying it through n8n, you can build assistants that pull the right information before generating a response. Our team used a similar approach when building an AI application processing system for a talent marketplace, where accurate document retrieval was essential. If you are evaluating vector database options and already run Supabase, this node keeps your architecture simple. Need help designing a RAG pipeline? Talk to our AI development team about building a solution that fits your existing stack.
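
    Under the hood this is Postgres with the pgvector extension. A minimal psycopg2 sketch follows; the connection string, table layout, and 1536-dimension size are illustrative assumptions.

    ```python
    import psycopg2

    conn = psycopg2.connect("postgresql://user:pass@db.example.supabase.co:5432/postgres")
    cur = conn.cursor()

    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS documents (
            id bigserial PRIMARY KEY,
            content text,
            embedding vector(1536)
        );
    """)

    # Nearest-neighbour search: `<=>` is pgvector's cosine-distance operator.
    query_vec = [0.0] * 1536  # replace with a real embedding
    cur.execute(
        "SELECT content FROM documents ORDER BY embedding <=> %s::vector LIMIT 3;",
        (str(query_vec),),
    )
    for (content,) in cur.fetchall():
        print(content)
    conn.commit()
    ```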
  • Wolfram|Alpha

    The Wolfram|Alpha node in n8n brings computational knowledge into your automation workflows. Wolfram|Alpha is not a search engine — it computes answers from structured data across mathematics, science, geography, finance, and dozens of other domains. This node lets you query that computational engine programmatically, which is useful when your workflows need precise factual answers rather than generated text. Where AI language models sometimes hallucinate numbers or get calculations wrong, Wolfram|Alpha returns verified, computed results. Need to convert currencies at today’s rate? Calculate compound interest? Look up the population of a city or the molecular weight of a chemical compound? This node handles all of that reliably, making it a strong complement to AI-driven workflows where accuracy matters. For businesses building AI agents, Wolfram|Alpha fills a critical gap. Language models are good at reasoning and language, but they struggle with real-time data and precise computation. By adding this node to your agent’s tool chain, you give it the ability to answer quantitative questions accurately — financial calculations, unit conversions, statistical lookups, and more. Teams working in finance, engineering, logistics, and data processing find this node particularly valuable. If you need help integrating computational tools into your automation stack, our AI consulting team can design a workflow that combines the reasoning power of language models with the precision of Wolfram|Alpha.
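
    For a feel of what comes back, here is the equivalent direct call to Wolfram|Alpha's Short Answers API in Python; a valid AppID from the Wolfram developer portal is assumed.

    ```python
    import requests

    APPID = "<your-wolfram-appid>"

    # The Short Answers API returns a single plain-text computed result.
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": APPID, "i": "molecular weight of caffeine"},
        timeout=30,
    )
    print(resp.text)   # e.g. "194.19 grams per mole" (exact wording may vary)
    ```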
  • Embeddings Google Gemini

    The Embeddings Google Gemini node in n8n converts text into vector embeddings using Google’s Gemini embedding models. These embeddings are numerical representations of meaning — they capture what your text is about, not just the words it contains. This is the foundation for semantic search, retrieval-augmented generation (RAG), and any workflow where you need to compare or cluster text by meaning rather than keywords. Google Gemini’s embedding models are fast, cost-effective, and produce high-quality vectors that work well across a range of use cases. Whether you are indexing a knowledge base for an AI assistant, building a document similarity engine, or classifying incoming support tickets by topic, this node handles the text-to-vector conversion step within your n8n pipeline. For teams building AI agents or custom AI solutions, embeddings are a core building block. You generate embeddings for your documents once, store them in a vector database like Supabase, Pinecone, or Zep, and then query them at runtime to give your AI access to relevant context. The Gemini embedding models offer a strong balance of quality and speed, particularly for organisations already using Google Cloud services. If you are building a RAG pipeline or need help choosing the right embedding model for your use case, our AI consulting team can help you design an architecture that balances accuracy, speed, and cost.
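
    A minimal sketch using Google's google-generativeai Python SDK; the model name and task type were current at the time of writing but are worth verifying for your account.

    ```python
    import google.generativeai as genai

    genai.configure(api_key="<your-api-key>")  # assumed: a Google AI API key

    result = genai.embed_content(
        model="models/text-embedding-004",
        content="How do I reset my password?",
        task_type="retrieval_query",   # hint for retrieval-style embeddings
    )
    vector = result["embedding"]
    print(len(vector), "dimensions")
    ```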
  • Embeddings Mistral Cloud

    The Embeddings Mistral Cloud node in n8n generates vector embeddings using Mistral AI’s cloud-hosted embedding models. Mistral has built a reputation for producing efficient, high-quality models, and their embedding offering is no exception — it delivers strong semantic representations at competitive pricing, making it an attractive option for teams that want quality embeddings without the cost overhead of larger providers. Embeddings are the building blocks of semantic search, document classification, and retrieval-augmented generation (RAG). When you convert text into a vector embedding, you capture its meaning in a format that machines can compare mathematically. This lets your workflows find relevant documents, cluster similar content, or match user queries to the right knowledge base articles — all based on meaning rather than exact keyword matches. For organisations building AI agents or internal search tools, Mistral embeddings offer a practical middle ground. They are fast enough for real-time applications, accurate enough for production RAG pipelines, and priced competitively for high-volume use. Teams running automated data processing workflows that need to classify or route documents by content find them particularly effective. Choosing the right embedding model affects the quality of everything downstream — search accuracy, agent reliability, and user experience. If you want guidance on which model fits your use case and budget, our consulting team can benchmark options against your actual data and recommend the best fit.
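
    A sketch of the direct API call, assuming the endpoint shape and the mistral-embed model name from Mistral's documentation:

    ```python
    import requests

    resp = requests.post(
        "https://api.mistral.ai/v1/embeddings",
        headers={"Authorization": "Bearer <your-mistral-api-key>"},
        json={"model": "mistral-embed", "input": ["Invoice dispute process"]},
        timeout=30,
    )
    resp.raise_for_status()
    vector = resp.json()["data"][0]["embedding"]
    print(len(vector), "dimensions")   # mistral-embed returns 1024-dim vectors
    ```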
  • Embeddings Google PaLM

    The Embeddings Google PaLM node in n8n generates vector embeddings using Google’s PaLM (Pathways Language Model) embedding models. While Google has since released newer Gemini models, PaLM embeddings remain available and are still used in production workflows that were built on the PaLM API. This node converts text into dense vector representations that capture semantic meaning, enabling similarity search, clustering, and retrieval-augmented generation within your n8n pipelines. If your organisation adopted Google’s PaLM API early and has existing vector collections built with PaLM embeddings, this node ensures compatibility. Switching embedding models mid-project means re-indexing your entire document collection, so there is real value in maintaining consistency with the model you originally used for indexing. The PaLM embeddings produce reliable vectors for most common use cases including document search, FAQ matching, and content recommendation. For new projects, you may want to evaluate the newer Gemini embedding models alongside PaLM, as Google continues to improve embedding quality and reduce costs with each generation. However, PaLM remains a solid choice for teams with established pipelines that are working well. Whether you are maintaining an existing PaLM-based system or evaluating your options for a new build, our custom AI development team can help you make the right architectural decisions. We have built RAG pipelines across multiple embedding providers and can advise on trade-offs between migration effort and performance gains.
  • In-Memory Vector Store

    In-Memory Vector Store is an n8n node that creates a temporary vector database directly in your workflow process memory. You feed it text, it generates embeddings, and you can immediately run semantic similarity searches against it — all without setting up Pinecone, Supabase, or any external database. The data lives only for the duration of the workflow execution and disappears when the run completes. This makes it perfect for prototyping RAG (retrieval-augmented generation) workflows, processing document batches where you need to cross-reference content within a single run, and testing embedding strategies before committing to a production vector database. A common pattern is loading a set of documents at the start of a workflow, searching them based on user queries or extracted criteria, and then discarding the vectors when the job is done. No infrastructure costs, no database management, no credentials to configure. The trade-off is clear: no persistence. When the workflow finishes, the vectors are gone. For production systems that need to retain and query data across executions, you would move to Pinecone or Supabase. But for rapid iteration, batch processing, and proof-of-concept builds, In-Memory Vector Store removes every barrier to getting started. Our AI consulting team regularly uses it during discovery workshops to demonstrate RAG capabilities to clients before designing the production architecture.
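
    The whole idea fits in a few lines of Python: keep (text, vector) pairs in a list, rank by cosine similarity, and let everything disappear when the process ends. The embeddings below are toy values; in n8n a connected embeddings node produces them.

    ```python
    import math

    store: list[tuple[str, list[float]]] = []

    def insert(text: str, embedding: list[float]) -> None:
        store.append((text, embedding))

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    def search(query_embedding: list[float], top_k: int = 3) -> list[str]:
        ranked = sorted(store, key=lambda item: cosine(item[1], query_embedding), reverse=True)
        return [text for text, _ in ranked[:top_k]]

    insert("Refunds are processed within 14 days.", [0.9, 0.1, 0.0])
    insert("Shipping takes 3-5 business days.", [0.1, 0.9, 0.2])
    print(search([0.8, 0.2, 0.1], top_k=1))   # -> the refund document
    ```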
  • In Memory Vector Store Insert

    In Memory Vector Store Insert is the write-side companion to the In-Memory Vector Store node in n8n. While the vector store itself provides the search capability, this insert node handles the loading — taking your text data, passing it through an embedding model, and adding the resulting vectors to the in-memory store so they can be queried later in the same workflow execution. The typical workflow pattern is: trigger fires, documents are fetched from a source (files, APIs, database), the insert node embeds and stores them, and then a retriever node searches them based on a query. All of this happens in a single execution cycle with no external database involved. This makes it the fastest way to build a working RAG prototype or run a batch analysis job where you need to cross-reference a set of documents against each other or against specific criteria. Because the store is ephemeral — data disappears after the execution — this node is best suited for development, testing, and single-run batch processing. For production systems that need to retain vectors across runs, you would swap in a Supabase Vector Store or Pinecone Vector Store insert node with minimal workflow changes. Our n8n consultants typically start client projects with in-memory inserts for fast iteration, then migrate to persistent storage once the retrieval logic is validated and the workflow is ready for production deployment.
  • Pinecone Vector Store

    Pinecone Vector Store is an n8n node that connects your workflows to Pinecone — a purpose-built, fully managed vector database designed for production-scale semantic search. Unlike in-memory stores that vanish after each run, Pinecone persists your vectors indefinitely, handles billions of embeddings, and delivers sub-second query responses. If you are building AI applications that need to search large, growing datasets reliably, Pinecone is the infrastructure-grade option. In n8n, this node handles both writing and reading. You can insert new vectors (documents, product listings, support articles), update existing ones, and run similarity searches — all from within your workflow. Pinecone manages the underlying infrastructure: indexing, replication, scaling, and backups. Your team focuses on what goes into the database and what to do with the results, not on managing servers or tuning database performance. Businesses with production RAG systems, large-scale recommendation engines, or customer-facing AI search typically land on Pinecone after outgrowing lighter options. Our AI agent development team has deployed Pinecone-backed systems for clients needing reliable retrieval across large knowledge bases — from insurance data pipelines to enterprise document search. If uptime, scale, and retrieval speed matter to your business, Pinecone is built for exactly that.
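
    A minimal sketch with the Pinecone Python client, assuming an existing index named "knowledge-base" whose dimension matches your embedding model:

    ```python
    from pinecone import Pinecone

    pc = Pinecone(api_key="<your-pinecone-api-key>")
    index = pc.Index("knowledge-base")

    # Write side: upsert persists vectors across workflow runs.
    index.upsert(vectors=[
        {"id": "doc-1", "values": [0.1] * 1536, "metadata": {"title": "Refund policy"}},
    ])

    # Read side: similarity search, with metadata returned for downstream steps.
    results = index.query(vector=[0.1] * 1536, top_k=3, include_metadata=True)
    for match in results.matches:
        print(match.id, match.score, match.metadata["title"])
    ```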
  • Twilio Trigger

    Twilio Trigger is an n8n node that starts a workflow whenever a Twilio event occurs — an incoming SMS, a phone call, a WhatsApp message, or a voice call status change. It turns your n8n instance into a real-time communication processor, reacting to customer messages the moment they arrive rather than polling for updates on a schedule. Businesses use this trigger to build responsive customer interactions without custom server code. When a customer texts your Twilio number, the trigger fires immediately and your workflow takes over — routing the message to the right team, sending an automated reply, logging the interaction in your CRM, or passing the message to an AI agent for intelligent response generation. The same applies to inbound calls, voicemail notifications, and WhatsApp conversations. For companies handling customer inquiries via SMS or WhatsApp, Twilio Trigger is the entry point for building AI-powered agents that respond in seconds rather than hours. Our team has built Twilio-triggered workflows for appointment confirmations, support ticket creation, lead qualification, and two-way conversational AI. Combined with business automation downstream, you can handle a significant volume of customer communications without adding headcount — each message gets processed, categorised, and routed automatically.
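
    Behind the trigger, Twilio POSTs form-encoded fields (such as From and Body) to a webhook URL. A rough Flask equivalent of that entry point, replying with TwiML, looks like this; route and reply text are illustrative.

    ```python
    from flask import Flask, request
    from twilio.twiml.messaging_response import MessagingResponse

    app = Flask(__name__)

    @app.route("/sms", methods=["POST"])
    def inbound_sms():
        # Twilio delivers each inbound message as form-encoded fields.
        sender = request.form["From"]
        body = request.form["Body"]
        print(f"SMS from {sender}: {body}")  # hand off to routing, CRM, or an AI agent

        # Reply immediately with TwiML; Twilio relays this text to the customer.
        reply = MessagingResponse()
        reply.message("Thanks, we've received your message and will be in touch.")
        return str(reply), 200, {"Content-Type": "application/xml"}

    if __name__ == "__main__":
        app.run(port=5000)
    ```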
  • Manual Chat Trigger

    The Manual Chat Trigger node in n8n lets you build interactive chat interfaces that connect directly to your automation workflows. Rather than relying on third-party chatbot platforms with rigid templates, this node gives you full control over how conversations start, how messages get routed, and what happens next in your pipeline. It is particularly useful for businesses that need a lightweight, self-hosted chat entry point without the overhead of a full conversational AI platform. Customer support teams often struggle with disconnected tools — a chatbot here, a ticketing system there, and a CRM somewhere else. The Manual Chat Trigger bridges that gap by letting you capture chat inputs and feed them straight into workflows that create tickets, look up customer records, or route queries to the right department. You define the logic, not a vendor’s pre-built decision tree. For AI-powered use cases, this node pairs well with language model nodes like OpenAI or Claude to build custom AI agents that respond to user queries in real time. You can chain it with memory nodes, vector stores, and external APIs to create context-aware assistants tailored to your business domain. Teams running internal help desks or client-facing support portals find this approach far more flexible than off-the-shelf solutions. If you need help designing chat-driven workflows that actually solve problems, our n8n consulting team can architect a solution that fits your stack and scales with your team.
  • Vector Store Retriever

    Vector Store Retriever is an n8n node that pulls relevant documents from a vector database based on semantic similarity. Rather than relying on exact keyword matches, it converts your query into a numerical embedding and finds the closest stored vectors — returning the most contextually relevant results. This matters for any business sitting on large volumes of unstructured data: internal knowledge bases, support ticket archives, product catalogues, or compliance documentation. In practice, teams use Vector Store Retriever as the backbone of retrieval-augmented generation (RAG) workflows. A customer support chatbot, for instance, can query your vector store to pull the three most relevant help articles before generating a response. The result is grounded, accurate answers instead of hallucinated guesswork. It pairs with vector databases like Pinecone, Supabase, and in-memory stores within n8n. If you are building AI agents or conversational interfaces that need to reference your own data, Vector Store Retriever is essential plumbing. Our AI agent development team has deployed RAG pipelines across industries — from medical document classification to insurance data retrieval. Whether you need a simple lookup or a multi-step reasoning chain, this node handles the retrieval layer so your language model can focus on generating useful output.
  • Ollama Model

    Ollama Model is an n8n node that connects your workflows to locally hosted large language models via Ollama. Instead of sending data to third-party APIs like OpenAI or Anthropic, you run the model on your own hardware — keeping sensitive information within your network. This matters for businesses handling confidential client data, medical records, financial documents, or anything that cannot leave your infrastructure for compliance or privacy reasons. The node works like any other language model connection in n8n. You point it at your Ollama instance, select a model (Llama, Mistral, Gemma, or dozens of other open-source options), and feed it prompts from your workflow. It supports chat completions, text generation, and structured output extraction. The trade-off versus cloud APIs is that you manage your own hardware and model performance, but you gain full control over data residency, latency, and per-query cost (which drops to near zero after the initial setup). Our AI consulting team works with businesses that need private, self-hosted AI but lack the infrastructure experience to set it up properly. From medical document classification to internal knowledge assistants, Ollama-powered workflows give you production-grade AI without sending a single byte to external servers. If you want the capabilities of a large language model with the data governance your industry demands, this node makes it practical inside n8n.
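
    A minimal sketch of the local API call the node makes, assuming `ollama serve` is running and the llama3 model has been pulled:

    ```python
    import requests

    # Local call to Ollama's HTTP API; nothing leaves this machine.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",
            "prompt": "Summarise: the client asked to reschedule Thursday's review.",
            "stream": False,   # return one JSON object instead of a token stream
        },
        timeout=120,
    )
    print(resp.json()["response"])
    ```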
  • Zep

    Zep is a long-term memory store for AI agents and chatbots, available as an n8n node. Unlike simple buffer memory that forgets everything when a session ends, Zep persists conversation history, extracts key facts, and lets your AI recall relevant context from days, weeks, or months ago. This transforms a basic chatbot into an assistant that genuinely remembers your users — their preferences, past issues, and ongoing projects. Under the hood, Zep stores messages and automatically generates summaries of older conversations. When your AI agent receives a new message, Zep retrieves the most relevant historical context using semantic search rather than dumping the entire chat log into the prompt. This keeps token usage manageable while giving the model access to important details from past interactions. It also supports user-level memory, so each customer or team member gets their own persistent context. For businesses building customer-facing AI or internal assistants that handle repeat interactions, Zep solves the biggest complaint users have with chatbots: “I already told you this.” Our AI agent development team has integrated Zep into support workflows, onboarding assistants, and account management bots. Combined with vector retrieval for knowledge base lookups, Zep-backed agents deliver the kind of personalised, context-aware experience that builds trust and reduces escalations to human staff.
  • SerpApi (Google Search)

    SerpApi (Google Search) is an n8n node that lets your workflows query Google Search programmatically and receive structured results. Instead of scraping search pages — which is fragile, slow, and violates terms of service — SerpApi provides a clean API that returns organic results, featured snippets, knowledge panels, and related questions in a structured format your automation can immediately process. Businesses use this node to power a range of workflows: competitive monitoring that tracks where rivals rank for target keywords, lead enrichment that researches companies before outreach, content research that identifies trending topics and gaps, and AI agents that can search the web to answer questions with current information. The node slots into larger n8n workflows alongside data processing, AI, and notification nodes — so you can build end-to-end pipelines that search, analyse, and act without manual intervention. If you are building AI agents that need access to live web data, SerpApi is one of the most reliable ways to give them that capability. Our AI agent development team has used it in research assistants, market intelligence tools, and sales automation workflows that need real-time competitive data. Combined with automated data processing, search results can be filtered, enriched, and routed to the right people or systems without anyone manually Googling anything.
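
    A sketch of the underlying request in Python; the query is illustrative and a valid SerpApi key is assumed.

    ```python
    import requests

    resp = requests.get(
        "https://serpapi.com/search",
        params={
            "engine": "google",
            "q": "business process automation consultants australia",
            "api_key": "<your-serpapi-key>",
        },
        timeout=30,
    )
    # SerpApi returns structured JSON rather than raw HTML.
    for result in resp.json().get("organic_results", [])[:5]:
        print(result["position"], result["title"], result["link"])
    ```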
  • Postgres Trigger

    Postgres Trigger is an n8n node that starts workflows automatically when data changes in a PostgreSQL database. It detects inserts, updates, and deletes on specified tables, giving you real-time automation that responds to database events without polling or manual intervention. For businesses that run on PostgreSQL — and many do, from SaaS platforms to e-commerce systems — this trigger turns your database into an event source. When a new order is inserted, a customer record is updated, or a row is deleted, the trigger fires and your workflow takes over. No more scheduled jobs that check for changes every few minutes and miss the context of what actually happened. Our system integration team at Osher Digital uses the Postgres Trigger extensively for clients who need their business logic to react to data changes in real time. We built a workflow for an insurance technology client where database updates from their claims processing system triggered automated notifications and data pipeline refreshes — part of the work described in our BOM weather data pipeline case study. It’s a pattern we apply across industries wherever PostgreSQL sits at the centre of operations.
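
    The real-time behaviour rests on Postgres LISTEN/NOTIFY. Below is a sketch of that mechanism in Python with psycopg2; the table, channel, and connection details are illustrative, and CREATE OR REPLACE TRIGGER needs Postgres 14 or later.

    ```python
    import select
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("postgresql://user:pass@localhost:5432/app")
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()

    # One-off database setup: notify a channel on every new order row.
    cur.execute("""
        CREATE OR REPLACE FUNCTION notify_new_order() RETURNS trigger AS $$
        BEGIN
            PERFORM pg_notify('new_order', NEW.id::text);
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE OR REPLACE TRIGGER orders_notify
            AFTER INSERT ON orders
            FOR EACH ROW EXECUTE FUNCTION notify_new_order();
    """)

    # Listener loop: blocks until the database pushes an event.
    cur.execute("LISTEN new_order;")
    while True:
        if select.select([conn], [], [], 60) != ([], [], []):
            conn.poll()
            while conn.notifies:
                note = conn.notifies.pop(0)
                print("New order id:", note.payload)
    ```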
  • Embeddings Hugging Face Inference

    Embeddings Hugging Face Inference is an n8n AI node that converts text into numerical vector representations using models hosted on Hugging Face’s inference API. These embeddings capture the semantic meaning of your text, enabling similarity search, document clustering, and the retrieval component of RAG (Retrieval-Augmented Generation) systems. If you’re building any kind of AI-powered search or knowledge system in n8n, you need an embeddings node to turn your documents into vectors that a vector store can index. Hugging Face offers a wide range of embedding models — from lightweight options for simple use cases to specialised multilingual models — and this node gives you access to all of them without hosting your own infrastructure. Our AI agent development team at Osher Digital uses this node when clients want Hugging Face models for their embedding pipeline, often for cost or data sovereignty reasons. For projects where we need full control over the embedding model — like our medical document classification work — Hugging Face provides the flexibility to choose domain-specific models that outperform general-purpose alternatives. Our custom AI development team handles model selection and pipeline configuration.
  • Embeddings Ollama

    Embeddings Ollama is an n8n AI node that generates text embeddings using models running locally through Ollama — an open-source tool for running large language models on your own hardware. This means your text data never leaves your infrastructure, making it the go-to choice for organisations with strict data privacy requirements or those who want to eliminate per-request API costs. The node works the same way as cloud-based embedding options: it converts text into numerical vectors for similarity search, document retrieval, and RAG systems. The difference is that everything runs on your own servers. For businesses processing sensitive data — healthcare records, legal documents, financial information — this local-first approach removes the compliance headache of sending data to third-party APIs. At Osher Digital, we recommend the Ollama embedding node for clients who need to keep data on-premises or who process enough volume that API costs become significant. We’ve deployed self-hosted embedding pipelines for healthcare clients where patient data privacy is non-negotiable, including work similar to our patient data entry automation project. Our AI agent development team handles the full setup from hardware sizing to model selection.
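
    A minimal local call, assuming the nomic-embed-text model has been pulled; newer Ollama releases also expose an /api/embed route with a slightly different request shape.

    ```python
    import requests

    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": "Patient admission summary"},
        timeout=60,
    )
    vector = resp.json()["embedding"]
    print(len(vector), "dimensions")   # the text never left this machine
    ```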
  • Azure OpenAI Chat Model

    Azure OpenAI Chat Model is an n8n AI node that connects workflows to OpenAI’s GPT models hosted on Microsoft Azure. Instead of calling OpenAI’s API directly, you route requests through your own Azure subscription — giving you enterprise-grade security, data residency controls, and the ability to use private networking to keep AI traffic off the public internet. For organisations that already run on Azure or have compliance requirements around where their data is processed, this node is the practical way to use GPT models within n8n. You get the same model capabilities as the standard OpenAI node, but with Azure’s identity management, network isolation, and regional deployment options. Your data stays within your Azure tenant and doesn’t flow through OpenAI’s infrastructure. Our custom AI development team at Osher Digital recommends Azure OpenAI for clients in regulated industries — healthcare, financial services, and government — where data handling requirements rule out sending information to third-party APIs. We’ve used it for projects involving sensitive document processing, similar to our medical document classification work, where data sovereignty was a firm requirement. Our AI consulting team helps clients decide between Azure OpenAI, direct OpenAI, and self-hosted alternatives based on their specific compliance needs.
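
    A minimal sketch with the official openai Python package's AzureOpenAI client; the endpoint, deployment name, and API version are placeholders taken from your Azure portal.

    ```python
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",
        api_key="<your-azure-key>",
        api_version="2024-02-01",
    )

    resp = client.chat.completions.create(
        model="<your-deployment-name>",   # the deployment, not the raw model name
        messages=[
            {"role": "system", "content": "You are a careful document summariser."},
            {"role": "user", "content": "Summarise the attached discharge note."},
        ],
    )
    print(resp.choices[0].message.content)
    ```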
  • Google PaLM Language Model

    Google PaLM Language Model is an n8n AI node that connects workflows to Google’s PaLM (Pathways Language Model) family via the Google AI API. PaLM models offer strong performance on text generation, summarisation, question answering, and classification tasks — and for organisations already invested in Google Cloud, using PaLM keeps everything within the Google ecosystem. While OpenAI models dominate the conversation, PaLM is a solid alternative for specific use cases. It performs well on structured data tasks, multilingual content, and code generation. For businesses that prefer Google’s infrastructure for compliance, billing, or integration reasons, PaLM provides comparable capabilities without adding another vendor to the mix. At Osher Digital, we evaluate all major model providers when designing AI workflows for clients. Google PaLM is often the right choice when a client already runs on Google Cloud Platform, needs strong multilingual support, or wants to consolidate their AI spending under a single cloud vendor. Our custom AI development team configures the node with the appropriate model variant and parameters. For clients interested in comparing options, we run benchmarks across providers using their actual data to find the best fit — not just the most popular name.
  • Mistral Cloud Chat Model

    Mistral Cloud Chat Model is an n8n AI node that connects workflows to Mistral AI’s hosted language models. Mistral has earned a reputation for producing compact, efficient models that punch above their weight — offering strong performance at lower cost and faster inference speeds than many competitors. The cloud chat model node gives you access to these models via Mistral’s API. For businesses building AI-powered workflows where speed and cost matter, Mistral is worth serious consideration. Models like Mistral Large and Mixtral handle complex reasoning tasks well, while smaller variants like Mistral Small deliver fast, affordable responses for simpler tasks like classification and routing. This flexibility means you can match the model to the job instead of paying for more capability than you need. Our AI agent development team at Osher Digital often uses Mistral models for specific nodes within larger workflows. For example, using Mistral Small for initial email classification (fast and cheap) before routing complex queries to a more capable model for detailed response generation. This multi-model approach keeps costs down without sacrificing quality where it counts. Our AI consulting team helps clients design these tiered architectures based on real performance benchmarks against their actual data.
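
    A sketch of the cheap classification tier described above, calling Mistral's chat completions endpoint directly; the model name and routing labels are illustrative.

    ```python
    import requests

    resp = requests.post(
        "https://api.mistral.ai/v1/chat/completions",
        headers={"Authorization": "Bearer <your-mistral-api-key>"},
        json={
            "model": "mistral-small-latest",
            "messages": [
                {"role": "system",
                 "content": "Classify the email as one of: billing, support, sales. "
                            "Reply with the label only."},
                {"role": "user", "content": "Hi, my last invoice was charged twice."},
            ],
        },
        timeout=30,
    )
    label = resp.json()["choices"][0]["message"]["content"].strip()
    print(label)   # expected: "billing"; route complex cases to a larger model
    ```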
  • OpenAI Model

    OpenAI Model is an n8n AI node that connects your workflows to OpenAI’s language models — including GPT-4, GPT-4 Turbo, and GPT-3.5 Turbo. It’s the most widely used AI model node in n8n and the starting point for most organisations adding AI capabilities to their automation workflows. The node handles text generation, summarisation, classification, translation, code generation, and conversational AI. You send a prompt (with optional system instructions), and the model returns a response that downstream nodes can parse, route, and act on. Combined with n8n’s workflow builder, it turns manual text-heavy tasks into automated pipelines that run without human intervention. At Osher Digital, we use the OpenAI Model node across a wide range of client projects. It powers the AI components in our talent marketplace application processing system, handles document analysis in medical classification workflows, and drives content generation in marketing automation pipelines. Our AI agent development team has deep experience with prompt engineering, token optimisation, and building reliable production workflows around OpenAI’s API. We also help clients evaluate when OpenAI is the right choice versus alternatives like Mistral, Claude, or self-hosted models.
  • Limit

    Limit is an n8n utility node that controls data flow by restricting how many items pass through a workflow at any given point. When you’re processing large datasets or pulling records from APIs, Limit lets you cap the output to a specific number of items — useful for testing, pagination, or preventing downstream systems from being overwhelmed. For businesses dealing with high-volume data pipelines, the Limit node is a small but important piece of the puzzle. It stops runaway processes, keeps API calls within rate thresholds, and gives you precise control over batch sizes. Whether you’re feeding data into a CRM, syncing records between platforms, or running nightly ETL jobs, Limit helps you manage throughput without writing custom logic. At Osher Digital, we use the Limit node regularly when building automated data processing workflows for clients. It’s particularly handy during initial testing — you can process just 10 records instead of 10,000 while you iron out the kinks. We’ve also used it in production workflows where APIs enforce strict rate limits, such as our BOM weather data pipeline project.
  • LoneScale Trigger

    LoneScale Trigger is an n8n node that fires workflows automatically when specific events occur in LoneScale — a sales intelligence platform that tracks job changes, hiring signals, and buying intent across target accounts. When a prospect changes roles, a company starts hiring for relevant positions, or a key contact joins a new organisation, the LoneScale Trigger kicks off your automation. For sales and recruitment teams, this is a practical way to act on signals that would otherwise get buried in dashboards. Instead of manually checking LoneScale each morning, the trigger pushes relevant events directly into your CRM, Slack channels, or outreach sequences. The result is faster response times and fewer missed opportunities. Our sales automation team at Osher Digital has integrated LoneScale Trigger into workflows for clients in recruitment and B2B sales. One common setup routes job-change alerts into HubSpot as new tasks, so sales reps get notified the moment a past customer lands at a new company. Combined with our AI agent development work, we can even draft personalised outreach based on the trigger data, ready for human review before sending.
  • LDAP

    LDAP (Lightweight Directory Access Protocol) is an n8n node that connects workflows to directory services like Microsoft Active Directory and OpenLDAP. It lets you query, create, update, and manage user accounts, groups, and organisational units directly from your automated workflows — without touching the directory manually. For IT teams and organisations with complex user management requirements, the LDAP node solves a real pain point. Employee onboarding, offboarding, permission changes, and compliance audits all involve directory updates that are tedious and error-prone when done by hand. Automating these through n8n means faster provisioning and fewer mistakes. At Osher Digital, we’ve implemented LDAP integrations for clients who need to keep their directory services in sync with HR systems, ticketing platforms, and access management tools. A common pattern is connecting BambooHR or similar HR platforms to Active Directory via n8n — when a new employee record is created, the workflow automatically provisions their AD account, adds them to the right groups, and triggers downstream access provisioning. Our RPA team handles the full implementation.
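
    A provisioning sketch with the ldap3 Python library; the server address, service account, base DN, and attributes are placeholders for your directory.

    ```python
    from ldap3 import Server, Connection, ALL

    server = Server("ldaps://ad.example.com", get_info=ALL)
    conn = Connection(server, user="CN=svc-n8n,OU=Service,DC=example,DC=com",
                      password="<service-account-password>", auto_bind=True)

    # Create the new starter's account.
    conn.add(
        "CN=Jane Citizen,OU=Staff,DC=example,DC=com",
        object_class=["inetOrgPerson"],
        attributes={"cn": "Jane Citizen", "sn": "Citizen", "mail": "jane@example.com"},
    )

    # Verify it landed and pull the attributes downstream steps need.
    conn.search("OU=Staff,DC=example,DC=com", "(mail=jane@example.com)",
                attributes=["cn", "mail"])
    for entry in conn.entries:
        print(entry.cn, entry.mail)
    ```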
  • Microsoft OneDrive Trigger

    Microsoft OneDrive Trigger is an n8n node that starts workflows automatically when files are created or modified in OneDrive. It watches specific folders for changes and fires your automation the moment something new lands — whether that’s an uploaded invoice, a signed contract, or an updated spreadsheet from a team member. For businesses running on Microsoft 365, this node eliminates the manual step of checking shared folders and acting on new files. Instead of someone downloading a report from OneDrive, renaming it, and forwarding it to the right team, the workflow handles it all. Documents get processed, data gets extracted, and notifications get sent without anyone lifting a finger. Our automated data processing team at Osher Digital uses the OneDrive Trigger frequently for document-heavy workflows. A practical example: a property inspection client uploads completed reports to a shared OneDrive folder, and the trigger kicks off a workflow that extracts key data, updates their project management system, and sends a summary to the operations manager, similar to what we delivered in our property inspection automation project.
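
    One way to detect changes programmatically is Microsoft Graph's drive delta endpoint, sketched below; a Graph token with Files.Read scope is assumed, and the n8n trigger handles this plumbing for you.

    ```python
    import requests

    token = "<graph-access-token>"
    url = "https://graph.microsoft.com/v1.0/me/drive/root/delta"

    while url:
        page = requests.get(url, headers={"Authorization": f"Bearer {token}"},
                            timeout=30).json()
        for item in page.get("value", []):
            print("Changed:", item.get("name"))
        if "@odata.deltaLink" in page:
            # Persist this link and call it next run to receive only new changes.
            print("Next poll URL:", page["@odata.deltaLink"])
            break
        url = page.get("@odata.nextLink")
    ```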
  • Question and Answer Chain

    Question and Answer Chain is an n8n AI node that connects a language model to a knowledge base, enabling it to answer questions based on your own documents and data. Rather than relying on the model’s general training, the chain retrieves relevant context from your vector store or document collection and feeds it to the LLM alongside the user’s question — a pattern known as Retrieval-Augmented Generation (RAG). This is the foundation for building internal knowledge bots, customer support assistants, and document Q&A systems. Instead of employees searching through PDFs, wikis, or shared drives, they ask a question in plain language and get an accurate answer grounded in your actual content. The chain handles the retrieval, context assembly, and response generation in a single workflow step. At Osher Digital, we use the Question and Answer Chain node as a core building block for RAG-based AI agents. For one client, we built a medical document classification system that answers clinician queries against a library of clinical guidelines — detailed in our AI medical document classification case study. Our custom AI development team configures the chain with the right retrieval strategy, prompt templates, and model settings for each use case.
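
    The retrieve-then-generate pattern condenses to a few lines. This toy Python sketch uses hand-made two-dimensional vectors and the openai package; in production the chain node wires a real vector store and retriever in for you.

    ```python
    import math
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY; chunks and vectors are toy data

    chunks = [
        ("Guideline A: fasting is required before the lipid panel.", [0.9, 0.1]),
        ("Guideline B: no fasting needed for HbA1c testing.", [0.1, 0.9]),
    ]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    def answer(question: str, question_vec: list[float]) -> str:
        # 1. Retrieve: pick the stored chunk most similar to the question.
        context = max(chunks, key=lambda c: cosine(c[1], question_vec))[0]
        # 2. Generate: ground the model's answer in the retrieved context.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": f"Answer only from this context:\n{context}"},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    print(answer("Does the patient need to fast for a lipid panel?", [0.85, 0.15]))
    ```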