Dev Tools & APIs

  • Studio by AI21 Labs

    Studio by AI21 Labs

    AI21 Studio is a developer platform from AI21 Labs that provides access to large language models (LLMs) through APIs. It offers text generation, summarisation, paraphrasing, and custom language model fine-tuning, giving businesses the building blocks to add AI-powered text capabilities to their products and workflows.

    What makes AI21 Studio useful for businesses is the range of purpose-built models it offers beyond basic text generation. The Jurassic and Jamba model families handle tasks like text completion, summarisation, and contextual answers, while specialised endpoints handle paraphrasing, grammar correction, and text segmentation. This means you can pick the right model for each specific task rather than using one general-purpose model for everything.

    At Osher, we help businesses integrate AI21 Studio models into their operations through our custom AI development services. Whether you need automated report summarisation, intelligent document processing, or AI-powered content tools, we build the pipelines that connect AI21 models to your business systems using n8n and custom integrations. See our medical document classification case study for an example of how we deploy language models in production workflows.

    AI21 Studio supports model fine-tuning on your own data, which means you can train the models to understand your industry terminology and writing style. For Australian businesses looking to add language AI capabilities without building models from scratch, it provides a practical middle ground between off-the-shelf chatbots and custom model training.
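    As a rough sketch of what an integration call looks like, the snippet below assembles a chat-style summarisation request. The endpoint path and the `jamba-mini` model id are assumptions here, not confirmed values; check the current AI21 Studio API reference before using them.

```python
import json

# Assumption: AI21 exposes an OpenAI-style chat completions endpoint.
# Verify the URL and model id against the current AI21 Studio docs.
AI21_CHAT_URL = "https://api.ai21.com/studio/v1/chat/completions"

def build_summarise_request(document: str, api_key: str) -> dict:
    """Describe (not send) an HTTP request for a summarisation call."""
    return {
        "url": AI21_CHAT_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": "jamba-mini",  # hypothetical model id
            "messages": [
                {"role": "system",
                 "content": "Summarise the document in three bullet points."},
                {"role": "user", "content": document},
            ],
        }),
    }

req = build_summarise_request("Quarterly report text...", "API_KEY")
```

    Keeping request construction in one function like this makes it easy to swap models per task, which is the point of AI21's purpose-built model range.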
  • Alchemy

    Alchemy

    Alchemy is a blockchain development platform that gives teams the APIs, SDKs, and infrastructure they need to build and scale decentralised applications without managing their own nodes. If your organisation is working with Web3 technology — whether that is an NFT marketplace, DeFi protocol, or on-chain data pipeline — Alchemy handles the heavy lifting so your developers can focus on product rather than plumbing.

    Where Alchemy fits into a broader automation strategy is in its event-driven architecture. Webhooks fire when on-chain conditions are met, which means you can trigger downstream workflows automatically — updating databases, notifying teams, or kicking off reconciliation processes. For businesses running automated data processing pipelines that pull from blockchain sources, Alchemy provides a reliable and fast data layer that replaces brittle public node connections.

    The platform supports Ethereum, Polygon, Arbitrum, Optimism, and several other networks, with enhanced APIs that outperform standard RPC endpoints on speed and reliability. For development teams already building on-chain, the debugging and monitoring tools cut troubleshooting time significantly. For organisations exploring blockchain integration, Alchemy removes the biggest barrier — infrastructure complexity — so you can validate use cases faster.

    At Osher, we have worked with clients who need to pipe blockchain data into existing business systems. Our system integration work often involves connecting APIs like Alchemy to internal tools, databases, and notification systems, turning raw on-chain events into actionable business data.
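    Consuming those webhooks safely starts with verifying the signature before acting on the payload. The sketch below assumes Alchemy's documented scheme of an HMAC-SHA256 over the raw request body, sent in the X-Alchemy-Signature header; confirm the header name and key format against the current webhook docs.

```python
import hashlib
import hmac

def is_valid_alchemy_signature(raw_body: bytes, signature: str,
                               signing_key: str) -> bool:
    """Check a webhook body against its hex-encoded HMAC-SHA256 signature.

    Assumption: Alchemy signs the raw body with your webhook signing key
    and sends the hex digest in the X-Alchemy-Signature header.
    """
    digest = hmac.new(signing_key.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(digest, signature)

body = b'{"webhookId": "wh_123", "event": {"network": "ETH_MAINNET"}}'
key = "whsec_demo"
sig = hmac.new(key.encode(), body, hashlib.sha256).hexdigest()
```

    Only once the signature checks out should the event be handed to downstream workflows (database updates, notifications, reconciliation).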
  • Daffy

    Daffy

    Daffy is a donor-advised fund (DAF) platform that makes charitable giving easier for individuals and businesses. It lets users set aside money for donations in a tax-advantaged account and then distribute grants to their chosen charities over time, all from a single app.

    For businesses, Daffy is useful as part of corporate giving programs, employee donation matching, and community engagement initiatives. The platform handles the tax receipts, grant processing, and charity verification, which removes a lot of the administrative overhead that usually comes with structured giving programs.

    At Osher, we help organisations integrate Daffy into their broader workflows using automation tools like n8n. This might mean syncing donation data with your accounting software, automating employee matching contributions, or building reporting dashboards that track giving across your organisation. Our system integration approach ensures Daffy fits into your existing tech stack rather than adding another disconnected tool.

    Daffy supports recurring contributions, portfolio-based investment of donated funds, and a clean interface for discovering and giving to charities. For organisations that want to make charitable giving part of their operations without the admin headache, it provides a practical solution backed by solid data processing capabilities.
  • CloudConvert

    CloudConvert

    CloudConvert is a file conversion API and web tool that supports over 200 file formats. It converts documents, images, videos, audio, spreadsheets, presentations, and ebooks between formats, making it a go-to tool for businesses that deal with files from multiple sources in different formats.

    The real power of CloudConvert is in its API. Rather than manually converting files one at a time through the web interface, you can build automated workflows that convert files as they arrive — turning uploaded PDFs into editable Word documents, compressing images for web use, or converting video files for different platforms. For businesses processing high volumes of files, this automation eliminates hours of repetitive work.

    At Osher, we integrate CloudConvert into automated document pipelines using n8n. Whether you need incoming client documents converted to a standard format, image files optimised before storage, or reports generated in multiple output formats, we build the workflows that handle file conversion without manual intervention. See our automated data processing services and our insurance tech data pipeline case study for real examples of how we handle document processing at scale.

    CloudConvert runs entirely in the cloud, supports batch processing, and offers file merging and watermarking features. For Australian businesses dealing with document-heavy processes, it is a practical tool that slots into larger automation workflows to keep files in the right format without anyone needing to think about it.
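    CloudConvert's v2 API models a conversion as a job made of linked tasks: import the file, convert it, then export the result. A minimal sketch of that job payload is below; the task labels are arbitrary names, and the operation strings should be checked against the current v2 reference before relying on them.

```python
def build_cloudconvert_job(source_url: str, output_format: str) -> dict:
    """Payload for POST https://api.cloudconvert.com/v2/jobs (sketch).

    Tasks reference each other by label via the "input" field, forming
    an import -> convert -> export chain.
    """
    return {
        "tasks": {
            "import-file": {"operation": "import/url", "url": source_url},
            "convert-file": {
                "operation": "convert",
                "input": "import-file",
                "output_format": output_format,
            },
            "export-file": {"operation": "export/url", "input": "convert-file"},
        }
    }

job = build_cloudconvert_job("https://example.com/report.pdf", "docx")
```

    In an n8n workflow the same chain is usually expressed through the CloudConvert node, but the underlying job structure is this.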
  • ApiFlash

    ApiFlash

    ApiFlash is a screenshot API that lets you capture full-page or viewport screenshots of any website programmatically. It runs on cloud-based Chrome instances, which means your screenshots render exactly as a real browser would display them — including JavaScript-heavy pages, dynamic content, and modern CSS layouts.

    For businesses, ApiFlash is useful for monitoring website changes, generating visual reports, creating automated previews of web content, and archiving webpage snapshots. Rather than manually taking screenshots, you can build workflows that capture them on a schedule or in response to specific events, then store or distribute them automatically.

    At Osher, we integrate ApiFlash into automated monitoring and reporting pipelines using n8n. This might mean capturing daily screenshots of competitor websites, generating visual previews for client reporting dashboards, or monitoring your own web properties for unexpected layout changes. Our automated data processing services handle these kinds of visual data workflows alongside traditional document processing.

    ApiFlash supports custom viewport sizes, full-page capture, CSS injection for styling overrides, geolocation targeting, and response format options including PNG, JPEG, and WebP. For Australian businesses that need reliable, automated website screenshots as part of their operations or reporting, it is a straightforward API that does one job well.
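    Because ApiFlash captures are driven entirely by query parameters on a single GET endpoint, the integration often reduces to building a URL. The sketch below assumes the v1 `urltoimage` endpoint and the `access_key`, `url`, `full_page`, and `format` parameter names; verify these against the current ApiFlash reference.

```python
from urllib.parse import urlencode

def apiflash_capture_url(access_key: str, target_url: str,
                         full_page: bool = True, fmt: str = "png") -> str:
    """Build a capture URL for ApiFlash's urltoimage endpoint (sketch).

    Fetching this URL returns the rendered screenshot, so it can be
    dropped straight into an HTTP Request step in a workflow.
    """
    params = {
        "access_key": access_key,
        "url": target_url,
        "full_page": str(full_page).lower(),  # API expects "true"/"false"
        "format": fmt,
    }
    return "https://api.apiflash.com/v1/urltoimage?" + urlencode(params)

url = apiflash_capture_url("DEMO_KEY", "https://example.com")
```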
  • ChargeOver

    ChargeOver

    ChargeOver is a recurring billing and invoicing platform built for subscription-based businesses. It handles automated payment collection, dunning management, and flexible pricing models so your finance team can stop chasing invoices and focus on actual financial strategy.

    Where ChargeOver gets interesting for Australian businesses is its API-first design. Rather than treating it as a standalone billing tool, you can wire it into your existing accounting software, CRM, and payment gateways to create a fully automated revenue pipeline. That means subscription changes, failed payment retries, and invoice generation all happen without manual intervention.

    At Osher, we help businesses connect ChargeOver to their broader tech stack using workflow automation tools like n8n. Whether you need to sync billing data with Xero, trigger customer comms from subscription events, or build custom reporting dashboards, we design integrations that actually match how your team works. Check out how we approach system integrations or see a real-world example in our insurance tech data pipeline case study.

    ChargeOver supports fixed, usage-based, and tiered pricing models, along with a customer self-service portal where subscribers can manage their own accounts. For businesses scaling their recurring revenue, it removes the billing bottleneck that often slows growth.
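    The billing-to-accounting sync usually comes down to a small mapping function between the two systems' record shapes. The sketch below is purely illustrative: the event fields on the ChargeOver side and the invoice fields on the Xero side are hypothetical stand-ins, not the real schemas of either API.

```python
def chargeover_event_to_xero_invoice(event: dict) -> dict:
    """Map a (hypothetical) ChargeOver invoice event to a Xero-style record.

    All field names here are illustrative; substitute the actual fields
    from the ChargeOver webhook payload and the Xero Invoices API.
    """
    return {
        "Type": "ACCREC",  # accounts-receivable invoice
        "Contact": {"Name": event["customer_name"]},
        "InvoiceNumber": event["invoice_number"],
        "Total": round(event["amount_cents"] / 100, 2),  # cents -> dollars
        "Status": "AUTHORISED" if event["paid"] else "SUBMITTED",
    }

invoice = chargeover_event_to_xero_invoice({
    "customer_name": "Acme Pty Ltd",
    "invoice_number": "INV-1042",
    "amount_cents": 129900,
    "paid": True,
})
```

    Keeping the mapping in one place like this is what makes the sync testable and easy to adjust when either schema changes.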
  • Chatrace

    Chatrace

    Chatrace is a chatbot and messaging automation platform designed for customer support and marketing across channels like Facebook Messenger, Instagram, and web chat. It lets businesses build conversational flows without coding, handling everything from lead qualification to support ticket routing.

    The real value of Chatrace shows up when you connect it to the rest of your business systems. Instead of treating it as a standalone chatbot, you can feed customer interactions into your CRM, trigger follow-up workflows, and route complex queries to human agents — all automatically. This means your support team handles fewer repetitive questions while customers get faster responses.

    At Osher, we help businesses integrate Chatrace with their existing tools using workflow automation platforms like n8n. Whether you need chatbot conversations to update customer records, trigger email sequences, or escalate to a ticketing system, we build the connections that make your messaging channels work as part of a larger business automation strategy. See how we approach similar work in our talent marketplace case study.

    Chatrace supports multi-channel deployment, audience segmentation, and broadcast messaging. For businesses fielding high volumes of customer enquiries, it reduces response times and frees up staff for conversations that actually need a human touch.
  • Spondyr

    Spondyr

    Spondyr is a real-time content personalisation platform that allows businesses to dynamically update website content, emails, and digital assets based on external data sources — without requiring developer intervention for every change. It acts as a bridge between your data and your customer-facing content, enabling live updates triggered by events, schedules, or API calls.

    For Australian businesses that need content to reflect real-time conditions — pricing changes, inventory levels, event schedules, or location-specific offers — Spondyr removes the bottleneck of waiting for a developer to make manual updates. Marketing and operations teams can set rules that automatically adjust what customers see based on live data.

    The platform supports integration with APIs, webhooks, and data feeds, making it possible to pull information from your CRM, inventory system, or external data sources and render it directly in your content. This is particularly useful for businesses with high-frequency content changes or multiple product variants.

    Our custom AI development and data processing teams can help you connect Spondyr to your business systems, setting up dynamic content pipelines that keep your customer-facing materials accurate and up to date without manual intervention.
  • DarkSky API

    DarkSky API

    DarkSky API was one of the most respected weather data APIs available, known for its hyperlocal, minute-by-minute precipitation forecasting and clean developer experience. Apple acquired Dark Sky in 2020 and has since transitioned the API to Apple WeatherKit, which offers similar data through a new interface under Apple’s developer programme.

    For businesses that built applications, dashboards, or automation workflows on the DarkSky API, the transition to Apple WeatherKit means updating integrations, changing authentication methods, and adapting to a different data format. Some businesses have migrated to alternative weather APIs like OpenWeatherMap, Tomorrow.io, or Visual Crossing instead.

    Weather data is critical for more industries than people realise. Logistics companies route deliveries around storms. Agricultural businesses schedule irrigation based on precipitation forecasts. Event companies plan outdoor activities around weather windows. Insurance firms assess weather-related risk in real time. The accuracy and reliability of your weather data source directly impacts operational decisions.

    Osher Digital builds weather data integrations for Australian businesses — whether you are migrating from DarkSky to WeatherKit, switching to an alternative provider, or building new weather-driven data processing and AI agent workflows. We have built weather pipeline integrations for insurance technology companies and other industries that depend on reliable weather data.
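    A migration like this is much easier if the application never reads the provider's response shape directly. The sketch below normalises a Dark Sky-style response into a provider-neutral record, so switching to WeatherKit or OpenWeatherMap only means writing a new adapter. The `currently`, `temperature`, and `precipProbability` fields reflect the legacy Dark Sky response; treat them as assumptions to verify against archived docs.

```python
def normalise_darksky(payload: dict) -> dict:
    """Adapter: Dark Sky-style response -> provider-neutral observation.

    Dark Sky reported temperatures in Fahrenheit by default, so we
    convert to Celsius in the neutral schema.
    """
    current = payload["currently"]
    return {
        "temperature_c": (current["temperature"] - 32) * 5 / 9,
        "precip_probability": current["precipProbability"],
        "summary": current.get("summary", ""),
    }

obs = normalise_darksky({
    "currently": {
        "temperature": 68.0,
        "precipProbability": 0.2,
        "summary": "Partly Cloudy",
    }
})
```

    Downstream logic (delivery routing, irrigation scheduling, risk scoring) then depends only on the neutral schema, not on whichever provider is current.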
  • Ably

    Ably

    Ably is a realtime infrastructure platform that handles the hard parts of building live, event-driven applications — pub/sub messaging, presence detection, push notifications, and data streaming at scale. When your application needs to push updates to thousands of connected users simultaneously, Ably provides the backbone that makes it work reliably.

    The use cases range from live chat and collaborative editing to IoT data streams, live sports scores, financial market data, and real-time dashboards. Any application where users need to see changes the instant they happen — without refreshing the page — is a candidate for Ably’s infrastructure.

    Building realtime features is deceptively complex. Connection management, message ordering, delivery guarantees, reconnection handling, and scaling across geographic regions all need to work flawlessly. Ably abstracts these challenges into a developer-friendly API, but integrating it properly with your application architecture, data sources, and backend systems still requires careful planning.

    Osher Digital integrates Ably into applications and AI agent systems for Australian businesses. Whether you are adding live features to an existing application, building a real-time dashboard, or connecting IoT devices to automated data processing pipelines, we handle the architecture and integration work.
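    The core abstraction Ably provides is channel-based publish/subscribe. The snippet below is a minimal in-process sketch of that pattern, not the Ably SDK: it shows the shape of the programming model (subscribe handlers to a channel, publish messages to it), while Ably's value is running this reliably across regions and thousands of connections.

```python
from collections import defaultdict
from typing import Callable

class PubSub:
    """In-process sketch of channel-based pub/sub (illustrative only)."""

    def __init__(self) -> None:
        # channel name -> list of subscriber callbacks
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, channel: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[channel].append(handler)

    def publish(self, channel: str, message: dict) -> None:
        # Fan the message out to every subscriber on the channel
        for handler in self._subscribers[channel]:
            handler(message)

bus = PubSub()
received: list[dict] = []
bus.subscribe("scores", received.append)
bus.publish("scores", {"match": 12, "home": 2, "away": 1})
```

    Everything this toy version ignores, such as ordering, delivery guarantees, and reconnection, is exactly what the managed service handles.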
  • Caspio

    Caspio

    Caspio is a low-code platform for building online database applications without writing traditional code. It lets businesses create searchable databases, data collection forms, reports, and interactive dashboards that can be embedded directly into existing websites or used as standalone applications.

    The platform is particularly useful for businesses that need custom data management tools but cannot justify the cost or timeline of full custom development. Think employee directories, inventory tracking systems, project management portals, or customer-facing search tools — the kind of internal applications that would normally require a developer team and months of work.

    Where Caspio gets complex is when you need it to talk to other systems. Pulling data from your CRM, pushing form submissions to your accounting software, or syncing records with an external API all require careful integration work. The platform has its own API and supports webhooks, but connecting it into a broader tech stack needs proper architecture.

    Osher Digital builds and integrates Caspio applications for businesses across Australia. We handle the custom development, system integrations, and data architecture so you get a polished application that works seamlessly with your existing tools.
  • Sifter

    Sifter

    Sifter is a straightforward bug and issue tracking tool designed for development teams that want to manage defects and feature requests without the overhead of heavyweight project management platforms. Sifter focuses on simplicity — you create issues, assign them, track their status, and close them when they are resolved. No Gantt charts, no sprint planning, no complexity you did not ask for.

    For small to mid-size development teams, agencies, and businesses managing software projects, Sifter fills the gap between a shared spreadsheet and enterprise tools like Jira. It is fast to set up, easy for non-technical stakeholders to use for reporting bugs, and does not require a project management certification to navigate.

    Sifter becomes more valuable when it connects to the rest of your development and business workflow. Integrated with communication tools, version control, and workflow automation, new issues can trigger notifications in Slack, link to code commits, update project dashboards, and escalate critical bugs to senior developers automatically. Our integration team builds these connections so your bug tracking feeds into — rather than sits apart from — your development process.

    If your team needs issue tracking that stays out of the way and lets developers focus on fixing things rather than managing a tool, Sifter integrated into an automated development workflow is a practical choice.
  • Snipcart

    Snipcart

    Snipcart is a developer-friendly e-commerce solution that adds a full shopping cart and checkout experience to any website without requiring a dedicated e-commerce platform. Snipcart works by embedding a JavaScript snippet into your existing site — whether it is built on a static site generator, headless CMS, or custom framework — and handling products, inventory, payments, shipping, and taxes through its overlay cart.

    This approach matters for businesses and agencies that want to sell products without rebuilding their website on Shopify or WooCommerce. If you have a well-performing site built on a modern stack — Next.js, Hugo, Webflow, or anything else — Snipcart lets you add commerce without changing your architecture. You keep your frontend, your performance, and your design flexibility.

    Snipcart becomes particularly powerful when connected to backend business systems through workflow automation. Orders can flow automatically into your fulfilment system, inventory updates sync across channels, customer data feeds into your CRM, and accounting records update in real time. Our integration team has built Snipcart-powered storefronts where the entire post-checkout process — from payment confirmation to shipping label generation — runs without anyone touching it.

    If you need e-commerce on a site that was not originally built for it, or you want to avoid the constraints of monolithic e-commerce platforms, Snipcart integrated into an automated backend gives you the selling capability without the platform lock-in.
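    Products in Snipcart are defined directly in your markup via `data-item-*` attributes on an "add to cart" button. The sketch below generates that markup from product data (useful in a static site build step); the `snipcart-add-item` class and attribute names follow Snipcart's product definition conventions, but verify the full required attribute list against the current docs.

```python
from html import escape

def snipcart_buy_button(item_id: str, name: str, price: float, url: str) -> str:
    """Render a Snipcart add-to-cart button from product data (sketch).

    Snipcart crawls data-item-url at checkout to validate the product,
    so it should point at a page containing this same definition.
    """
    return (
        f'<button class="snipcart-add-item" '
        f'data-item-id="{escape(item_id)}" '
        f'data-item-name="{escape(name)}" '
        f'data-item-price="{price:.2f}" '
        f'data-item-url="{escape(url)}">Add to cart</button>'
    )

button = snipcart_buy_button("tee-01", "Logo T-Shirt", 29.95, "/products/tee-01")
```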
  • Hybrid Analysis

    Hybrid Analysis

    Hybrid Analysis is an advanced malware analysis platform that combines static analysis, dynamic sandboxing and machine learning classification to determine whether files and URLs are malicious. Operated by CrowdStrike, the platform processes suspicious samples in isolated environments that simulate real operating systems, observing the actual behaviour of files — network connections, file system changes, registry modifications, process creation — to provide definitive verdicts that signature-based scanning alone cannot deliver.

    For Australian organisations dealing with targeted attacks, suspicious email attachments or files from untrusted sources, Hybrid Analysis provides the forensic depth needed to make confident security decisions. The platform generates detailed analysis reports including behavioural indicators, MITRE ATT&CK technique mapping, network indicators of compromise and risk scores that help security teams understand not just whether something is malicious, but what it does and how it operates.

    The Hybrid Analysis API enables programmatic submission and retrieval of analysis results, making it practical to integrate malware analysis into automated security workflows. Email security gateways can submit attachments for detonation before delivery, SOC playbooks can automatically analyse suspicious files extracted during incident response and threat intelligence teams can enrich indicators with behavioural analysis data. Our consulting team builds these automated analysis pipelines to ensure suspicious content gets evaluated systematically rather than relying on analyst availability.

    The platform supports analysis of executables, documents, scripts, archives and URLs across Windows, Linux and Android environments, providing broad coverage of the file types and platforms that Australian businesses encounter in their daily operations.
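    An automated pipeline needs a deterministic triage rule on top of the sandbox report. The sketch below assumes a report summary carrying a `verdict` string and a numeric `threat_score`, which mirrors how Hybrid Analysis summarises results, but treat the exact field names as assumptions to check against the API reference.

```python
def triage(report: dict, score_threshold: int = 70) -> str:
    """Decide quarantine / review / release from a sandbox report summary.

    Assumed fields: 'verdict' (e.g. "malicious", "suspicious") and
    'threat_score' (0-100). Verify names against the Hybrid Analysis API.
    """
    verdict = report.get("verdict", "")
    score = report.get("threat_score") or 0  # tolerate null scores
    if verdict == "malicious" or score >= score_threshold:
        return "quarantine"
    if verdict == "suspicious":
        return "analyst-review"
    return "release"

decision = triage({"verdict": "malicious", "threat_score": 95})
```

    Routing every sample through one rule like this is what makes the "detonate before delivery" workflow consistent instead of analyst-dependent.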
  • OpenCTI

    OpenCTI

    OpenCTI is an open-source threat intelligence platform designed to help organisations collect, store, analyse and share cyber threat intelligence in a structured, actionable format. Built on the STIX 2.1 standard and maintained by Filigran, the platform provides a knowledge graph approach to threat intelligence that maps relationships between threat actors, malware families, attack techniques, indicators of compromise and targeted sectors — giving security teams contextual understanding rather than disconnected indicator lists.

    For Australian organisations building threat intelligence capabilities, OpenCTI provides a practical foundation without the licensing costs of commercial platforms. The platform ingests intelligence from multiple sources including MISP feeds, TAXII servers, RSS feeds, CSV imports and direct API submissions, normalising everything into a consistent STIX format that enables meaningful correlation and analysis. This multi-source approach lets security teams combine commercial threat feeds, industry sharing groups and internal incident data into a unified intelligence picture.

    Where OpenCTI becomes particularly valuable is in operationalising threat intelligence — turning curated intelligence into defensive actions. Through its connectors and API, OpenCTI can push indicators to firewalls, SIEM correlation rules, endpoint detection platforms and automated blocking systems. Our consulting team helps organisations build these automated intelligence pipelines so threat data moves from analysis to protection without manual copy-paste operations that introduce delays and errors.

    The platform includes role-based access control, marking definitions for intelligence classification, and workflow capabilities for managing the intelligence lifecycle from ingestion through analysis to dissemination — essential features for organisations that share intelligence with partners, industry groups or government agencies like the Australian Cyber Security Centre (ACSC).
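    Because OpenCTI speaks STIX 2.1 natively, submitting internal indicators often means constructing a small STIX object. The sketch below builds a minimal Indicator for a file hash; the required properties and the patterning syntax follow the STIX 2.1 specification.

```python
import uuid
from datetime import datetime, timezone

def sha256_indicator(sha256: str) -> dict:
    """Build a minimal STIX 2.1 Indicator object for a SHA-256 file hash."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",  # STIX ids are type--UUID
        "created": now,
        "modified": now,
        "name": f"Malicious file {sha256[:12]}",
        # STIX patterning expression matched against observed data
        "pattern": f"[file:hashes.'SHA-256' = '{sha256}']",
        "pattern_type": "stix",
        "valid_from": now,
    }

ioc = sha256_indicator(
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
)
```

    Objects in this shape can then be bundled and pushed to OpenCTI's ingestion API or a TAXII endpoint, where the platform links them into its knowledge graph.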
  • Fortinet FortiGate

    Fortinet FortiGate

    Fortinet FortiGate is a next-generation firewall platform that combines network security, VPN, intrusion prevention and application control into a single appliance. For Australian businesses managing complex network environments, FortiGate provides enterprise-grade protection without the operational complexity of managing multiple standalone security products.

    What makes FortiGate particularly relevant for growing organisations is its Security Fabric architecture — a framework that allows FortiGate firewalls to share threat intelligence and coordinate policy enforcement with other Fortinet products and third-party systems. This means your firewall is not operating in isolation; it is feeding and receiving intelligence from your entire security ecosystem, creating layered defences that respond to threats faster than individual point products.

    The FortiGate REST API and FortiManager centralised management platform open up significant automation possibilities. Firewall policy changes, VPN provisioning, threat log analysis and compliance reporting can all be driven programmatically — making FortiGate a practical foundation for automated security operations. Our consulting team helps organisations build these automated workflows, connecting FortiGate to broader IT operations platforms so security management scales with the business rather than requiring proportional headcount growth.

    FortiGate appliances are available in physical, virtual and cloud-native form factors, covering everything from small branch office deployments through to high-throughput data centre environments. The FortiGuard Labs threat intelligence service provides continuously updated signatures, URL filtering and application control databases that keep protection current against emerging threats.
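    A common automation building block is creating firewall address objects programmatically, for example from a threat feed. The sketch below builds the JSON body for the FortiOS configuration API; the `/api/v2/cmdb/firewall/address` path and the `subnet` formatting are assumptions based on common FortiOS versions, so check them against the REST reference for the firmware you run.

```python
import ipaddress

def fortigate_address_payload(name: str, cidr: str) -> dict:
    """Body for a (assumed) POST /api/v2/cmdb/firewall/address call.

    FortiOS typically expects 'subnet' as "network netmask" rather
    than CIDR notation, so we expand it here.
    """
    net = ipaddress.ip_network(cidr)
    return {
        "name": name,
        "type": "ipmask",
        "subnet": f"{net.network_address} {net.netmask}",
    }

payload = fortigate_address_payload("blocked-scanner", "203.0.113.0/24")
```

    Once address objects can be created this way, adding them to a blocking address group and referencing that group in a deny policy closes the loop from threat data to enforcement.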
  • Auth0 Management API

    Auth0 Management API

    Auth0 Management API provides programmatic control over your entire identity and access management infrastructure — user accounts, roles, permissions, connections and tenant configuration. For organisations building customer-facing applications or managing complex workforce identity requirements, the Management API turns Auth0 from a login widget into a fully automatable identity platform that integrates deeply with your business systems.

    The practical value for Australian businesses lies in automating identity lifecycle operations that would otherwise require manual dashboard work. User provisioning from HR systems, role assignments based on CRM data, automated account deprovisioning when staff leave, bulk migrations from legacy identity systems — these are all operations the Management API handles programmatically. When connected to workflow automation platforms like n8n, identity management becomes a seamless part of your broader business processes rather than an isolated administrative task.

    Auth0 supports a comprehensive range of identity protocols including OAuth 2.0, OpenID Connect and SAML, with pre-built connections for social providers, enterprise directories and custom databases. The Management API extends this with capabilities for custom branding, multi-factor authentication configuration, anomaly detection rules and detailed authentication analytics — giving development teams the building blocks for sophisticated identity experiences without building authentication infrastructure from scratch.

    For organisations subject to Australian Privacy Act requirements or industry-specific compliance obligations, Auth0 provides audit logging, consent management and data residency options that address common regulatory concerns. Our development team helps businesses architect Auth0 deployments that balance user experience, security requirements and compliance obligations from the outset.
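    Deprovisioning is a good concrete example of the lifecycle automation described above. Blocking a user is a `PATCH /api/v2/users/{id}` call with `{"blocked": true}`; the sketch below assembles that request (without sending it), using an illustrative tenant domain.

```python
import json
from urllib.parse import quote

def build_block_user_request(domain: str, user_id: str, mgmt_token: str) -> dict:
    """Describe the Management API call that blocks (deprovisions) a user.

    'domain' is your Auth0 tenant domain (illustrative value in the
    example); mgmt_token is a Management API access token.
    """
    return {
        "method": "PATCH",
        # user ids like "auth0|abc123" must be URL-encoded in the path
        "url": f"https://{domain}/api/v2/users/{quote(user_id)}",
        "headers": {
            "Authorization": f"Bearer {mgmt_token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"blocked": True}),
    }

req = build_block_user_request("example.au.auth0.com", "auth0|abc123", "TOKEN")
```

    Triggered from an HR system offboarding event in n8n, a call like this removes the manual dashboard step when staff leave.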
  • F5 BIG-IP

    F5 BIG-IP

    F5 BIG-IP is an application delivery and security platform that handles load balancing, traffic management, SSL offloading, web application firewall (WAF) and API gateway functions for business-critical applications. For Australian organisations running high-availability web services, e-commerce platforms or customer-facing APIs, BIG-IP sits in front of your application infrastructure ensuring traffic reaches healthy servers, application attacks get blocked and performance remains consistent under variable load conditions.

    The platform operates through a modular licensing system — Local Traffic Manager (LTM) for load balancing, Advanced WAF for application security, Access Policy Manager (APM) for identity-aware access control and DNS for global traffic management. This modularity lets organisations deploy exactly the capabilities they need and expand as requirements grow, without replacing the underlying platform. For businesses managing application infrastructure across Australian data centres and cloud environments, BIG-IP provides a consistent traffic management layer regardless of where applications are hosted.

    Where BIG-IP delivers significant operational value is through its iControl REST API and declarative automation interfaces (AS3, DO, TS). Infrastructure-as-code approaches allow BIG-IP configuration to be version-controlled, tested and deployed through CI/CD pipelines — moving application delivery management from manual console operations to automated, repeatable processes. Our integration team helps organisations build these automated delivery pipelines, connecting BIG-IP to orchestration platforms, monitoring systems and incident management workflows.

    F5 also offers BIG-IP in virtual and cloud-native editions for AWS, Azure and Google Cloud, plus the newer distributed cloud services platform for organisations moving toward multi-cloud application delivery architectures.
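    Declarative automation with AS3 means describing the desired virtual server and pool as a JSON document and posting it to the BIG-IP. The sketch below builds a minimal declaration for an HTTP service; the class names follow F5's AS3 schema, but tenant and application names here are illustrative, and the declaration should be validated against the AS3 schema version you actually run.

```python
def minimal_as3(tenant: str, app: str, vip: str, pool_members: list[str]) -> dict:
    """Skeleton AS3 declaration: one HTTP virtual server fronting a pool.

    Illustrative sketch only -- validate against your AS3 schema version
    before deploying (POST to the AS3 endpoint on the BIG-IP).
    """
    return {
        "class": "AS3",
        "action": "deploy",
        "declaration": {
            "class": "ADC",
            "schemaVersion": "3.0.0",
            tenant: {
                "class": "Tenant",
                app: {
                    "class": "Application",
                    "serviceMain": {
                        "class": "Service_HTTP",
                        "virtualAddresses": [vip],
                        "pool": "web_pool",
                    },
                    "web_pool": {
                        "class": "Pool",
                        "members": [
                            {"servicePort": 80, "serverAddresses": pool_members}
                        ],
                    },
                },
            },
        },
    }

decl = minimal_as3("Tenant_A", "app_web", "192.0.2.10", ["10.0.0.11", "10.0.0.12"])
```

    Because the whole configuration is one JSON document, it can live in version control and flow through the same review and CI/CD process as application code.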
  • VirusTotal

    VirusTotal

    VirusTotal is a threat intelligence service that analyses files, URLs, domains, and IP addresses for malicious content by scanning them against dozens of antivirus engines and security datasets simultaneously. The n8n VirusTotal node lets you automate these lookups within your workflows — turning manual “check this file” or “is this URL safe” tasks into instant, automated security checks.

    Security teams and IT departments deal with suspicious indicators constantly. Someone reports a phishing email, a monitoring tool flags an unusual domain, or a file download triggers an alert. The standard response is to manually copy the indicator into VirusTotal’s web interface, wait for results, and decide what to do. That process works for one-off checks but falls apart when you are handling dozens of alerts per day.

    The n8n VirusTotal node automates the entire lookup and response chain. You can build workflows that automatically scan email attachments against VirusTotal, check URLs extracted from support tickets, enrich SIEM alerts with multi-engine scan results, and quarantine files that exceed a detection threshold — all without manual intervention. The results feed directly into your next workflow step.

    Osher Digital builds security automation and automated data processing workflows for Australian businesses. If your team needs faster, more consistent threat checking across files, URLs, and indicators, our systems integration specialists can wire VirusTotal into your security stack using n8n.
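    The "quarantine files that exceed a detection threshold" step comes down to reading the `last_analysis_stats` block of a VirusTotal v3 report and applying a rule. A minimal sketch, using a sample response in the v3 shape:

```python
def should_quarantine(vt_response: dict, threshold: int = 3) -> bool:
    """Quarantine when enough engines flag the sample as bad.

    Reads the last_analysis_stats block of a VirusTotal API v3
    file report (counts of engine verdicts by category).
    """
    stats = vt_response["data"]["attributes"]["last_analysis_stats"]
    flagged = stats.get("malicious", 0) + stats.get("suspicious", 0)
    return flagged >= threshold

# Sample report in the v3 response shape (values are illustrative)
report = {"data": {"attributes": {"last_analysis_stats": {
    "malicious": 5, "suspicious": 1, "harmless": 60, "undetected": 8}}}}
```

    In n8n this logic typically lives in an IF or Code node right after the VirusTotal node, branching the workflow into quarantine or release paths.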
  • Imperva WAF

    Imperva WAF

    Imperva WAF (Web Application Firewall) protects web applications, APIs and microservices from the full spectrum of application-layer attacks — from SQL injection and cross-site scripting through to sophisticated bot networks and DDoS campaigns. For Australian businesses running customer-facing web platforms, it provides a critical security layer that sits between your application and the internet, inspecting every request before it reaches your infrastructure.

    What sets Imperva apart from basic WAF solutions is its machine learning-driven threat detection engine. Rather than relying solely on static rule sets, the platform learns your application behaviour patterns and flags anomalies that signature-based systems miss. This is particularly relevant for organisations dealing with evolving attack vectors or running complex API ecosystems where traditional perimeter defences fall short.

    The real operational value emerges when Imperva WAF feeds into broader system integration workflows. Security events can trigger automated incident response — blocking suspicious IPs, alerting your SOC team, creating compliance audit trails and feeding threat data into your SIEM platform. Our consulting team helps organisations build these automated response pipelines so security events get handled in seconds rather than hours.

    Imperva offers both cloud-based and on-premises deployment options, with the cloud WAF providing rapid implementation for organisations that need protection quickly without significant infrastructure changes. The platform includes built-in compliance reporting for PCI-DSS, HIPAA and other frameworks relevant to Australian regulatory requirements.
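    As a sketch of the "blocking suspicious IPs" step, the function below picks repeat offenders out of a batch of WAF security events. The event fields (`client_ip`, `action`) are illustrative stand-ins, not the actual Imperva log schema, so map them to the real field names from your log export or SIEM feed.

```python
from collections import Counter

def ips_to_block(events: list[dict], min_hits: int = 5) -> set[str]:
    """Select source IPs for automated blocking from WAF events (sketch).

    Counts how often each source IP triggered a blocked request and
    returns those over the threshold. Field names are illustrative.
    """
    hits = Counter(
        e["client_ip"] for e in events if e.get("action") == "blocked"
    )
    return {ip for ip, n in hits.items() if n >= min_hits}

events = [{"client_ip": "198.51.100.7", "action": "blocked"}] * 6 \
       + [{"client_ip": "203.0.113.9", "action": "alerted"}]
```

    The resulting set would then feed an edge blocklist update and a SOC notification in the same workflow run.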
  • Aggregate

    Aggregate

    Aggregate is an n8n utility node that collects multiple items from previous steps and combines them into a single output. When your workflow processes data in batches or loops — pulling records from an API, iterating through spreadsheet rows, or handling webhook payloads — Aggregate gathers all those individual items so downstream nodes can work with the complete dataset at once. This matters more than it sounds. Many workflow steps produce one item at a time, but the next step needs all of them together. You might need to compile a summary report from individual API responses, merge rows before writing to a database, or collect results from parallel branches into one payload. Without Aggregate, you end up with fragmented data that is difficult to process as a whole. Developers and data teams use Aggregate as the glue between collection and action. It pairs naturally with Split In Batches, Loop Over Items, and HTTP Request nodes that return paginated results. Once everything is collected, you can sort, filter, or transform the combined dataset before sending it wherever it needs to go. Osher Digital builds automated data processing workflows for Australian businesses using n8n. If you are working with data from multiple sources and need it consolidated reliably, our n8n consulting team can design a workflow that handles the heavy lifting.
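    What Aggregate does can be pictured in a few lines of the JavaScript you would otherwise write in a Code node: collapse many n8n items into one item carrying the combined payload. The `data` field name here is an assumption chosen for illustration:

```javascript
// Collapse many n8n items (each { json: {...} }) into a single item
// whose `data` field holds every input item's JSON payload.
function aggregateItems(items) {
  return [{ json: { data: items.map((item) => item.json) } }];
}

const items = [
  { json: { id: 1, name: 'Alice' } },
  { json: { id: 2, name: 'Bob' } },
];
console.log(aggregateItems(items));
// one item: { json: { data: [ { id: 1, ... }, { id: 2, ... } ] } }
```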
  • Embeddings Cohere

    Embeddings Cohere

    Embeddings Cohere is an n8n node that generates vector embeddings from text using Cohere’s language models. Vector embeddings convert words, sentences, or documents into numerical representations that capture semantic meaning — making it possible for machines to understand similarity, relevance, and context in ways that keyword matching simply cannot. This node is essential for building AI-powered search, recommendation systems, and retrieval-augmented generation (RAG) pipelines. Instead of searching for exact keyword matches, embeddings let you find content that is conceptually related. A customer asking about “reducing operational costs” would match documents about “efficiency improvements” and “process optimisation” — even if those exact words never appear in the query. In practice, Embeddings Cohere slots into workflows where you need to index content for semantic search, classify text by topic, or feed context into a large language model. It pairs well with vector databases like Pinecone, Qdrant, or Weaviate, and works alongside n8n’s AI agent nodes for building intelligent retrieval systems. Osher Digital builds AI agent systems and custom AI solutions for Australian businesses using tools like Cohere embeddings. If you need semantic search, document retrieval, or an AI assistant that actually understands your data, get in touch with our AI consulting team.
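    The "conceptually related" matching rests on comparing vectors, most commonly with cosine similarity. A toy sketch with made-up three-dimensional vectors — real Cohere embeddings have hundreds or thousands of dimensions:

```javascript
// Cosine similarity: how embedding-based search scores relevance.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Illustrative vectors only — not actual model output.
const costs = [0.9, 0.1, 0.2];         // "reducing operational costs"
const efficiency = [0.85, 0.15, 0.25]; // "efficiency improvements"
const weather = [0.1, 0.9, 0.1];       // unrelated topic
console.log(cosineSimilarity(costs, efficiency) > cosineSimilarity(costs, weather)); // true
```

    Semantically close texts produce vectors that point in similar directions, so their cosine score approaches 1 even when they share no keywords.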
  • n8n Form Trigger

    n8n Form Trigger

    The n8n Form Trigger node creates web forms that feed directly into your automation workflows. Instead of using a separate form builder, connecting it to Zapier, and routing data through multiple tools, you build the form and the automation in one place. When someone submits the form, your workflow fires immediately with all the submitted data available for processing. This is particularly useful for internal operations — employee requests, client intake forms, feedback surveys, bug reports, and approval workflows. Rather than collecting form data in one tool and manually transferring it somewhere else, the Form Trigger routes submissions straight into whatever system needs them: a CRM, a project management tool, an email notification, or a database. For client-facing use cases, the Form Trigger handles lead capture, booking requests, and support ticket creation. You can validate inputs, send confirmation emails, create records in your CRM, and notify your team — all triggered automatically from a single form submission. Teams running business automation projects find this node eliminates an entire category of manual data entry work. We used a similar form-to-workflow approach when building an automated patient data entry system for a healthcare client, where incoming form submissions needed to be validated, classified, and routed to the right clinical team without manual intervention. If you need help designing form-driven workflows, our n8n consultants can architect a solution that fits your process.
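    The validate-and-route step after a submission is plain branching logic. A sketch with hypothetical field names — `email`, `message`, and `priority` are illustrative, not a fixed n8n form schema:

```javascript
// Validate a form submission and choose a downstream route — the kind
// of logic a workflow runs right after the Form Trigger fires.
function routeSubmission(submission) {
  const errors = [];
  if (!submission.email || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(submission.email)) {
    errors.push('invalid email');
  }
  if (!submission.message || submission.message.trim().length === 0) {
    errors.push('empty message');
  }
  if (errors.length > 0) return { route: 'reject', errors };
  // Route urgent requests to support, everything else to the CRM.
  const route = submission.priority === 'urgent' ? 'support' : 'crm';
  return { route, errors: [] };
}

console.log(routeSubmission({ email: 'jo@example.com', message: 'Help', priority: 'urgent' }));
// { route: 'support', errors: [] }
```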
  • TOTP

    TOTP

    The TOTP (Time-based One-Time Password) node in n8n generates and validates time-based authentication codes within your workflows. TOTP is the same technology behind authenticator apps like Google Authenticator and Authy — it produces a short-lived numeric code that changes every 30 seconds, tied to a shared secret key. This node lets you incorporate that security mechanism directly into your automation pipelines. The most common use case is automated authentication with services that require two-factor authentication (2FA). If your workflow needs to log into a system that demands a TOTP code, this node generates that code programmatically, eliminating the need for someone to manually open an authenticator app and type in numbers. This is essential for unattended automation that interacts with secured APIs or portals. For businesses concerned about system integration security, the TOTP node also enables you to build your own 2FA verification flows. You can generate TOTP secrets for users, validate codes they submit, and enforce time-based authentication as part of custom approval workflows or secure form submissions. Security and compliance are non-negotiable for industries like finance, healthcare, and legal services. If your automation workflows need to interact with secured systems or you want to add 2FA to your internal processes, our consulting team can help you design workflows that meet your compliance requirements without creating manual bottlenecks.
  • Convert to File

    Convert to File

    The Convert to File node in n8n transforms structured data from your workflows into downloadable files. It takes JSON data, spreadsheet rows, text content, or other structured formats and converts them into files like CSV, XLSX, JSON, HTML, or plain text. This is the node you reach for when your workflow needs to produce a report, export data for another system, or generate a file attachment for an email. Reporting and data export are among the most common automation needs across every industry. Instead of manually exporting data from dashboards, copying rows into spreadsheets, and emailing files to stakeholders, the Convert to File node handles the entire output step programmatically. Your workflow collects the data, processes it, and produces a ready-to-use file — all without human intervention. This node is especially valuable in automated data processing pipelines where data needs to move between systems in specific file formats. Whether you are generating weekly sales reports from CRM data, exporting compliance records as CSV files, or creating invoices from order data, the Convert to File node handles the format conversion. We built a similar file generation pipeline for a property inspection company that needed automated report generation from field data. If your team spends time manually exporting and formatting data, our business automation team can build workflows that generate the files you need on schedule or on demand — no manual steps required.
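    The CSV side of that conversion is easy to picture. A minimal sketch with naive quoting — the node itself supports more formats and handles encoding, binary output, and edge cases:

```javascript
// Turn an array of flat JSON rows into a CSV string. Fields containing
// commas, quotes, or newlines are wrapped and inner quotes doubled.
function toCsv(rows) {
  const escape = (v) => {
    const s = String(v);
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  };
  const headers = Object.keys(rows[0]);
  const lines = [headers.join(',')];
  for (const row of rows) {
    lines.push(headers.map((h) => escape(row[h])).join(','));
  }
  return lines.join('\n');
}

const rows = [
  { sku: 'A-1', name: 'Widget, large', qty: 3 },
  { sku: 'B-2', name: 'Gadget', qty: 7 },
];
console.log(toCsv(rows));
// sku,name,qty
// A-1,"Widget, large",3
// B-2,Gadget,7
```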
  • Embeddings AWS Bedrock

    Embeddings AWS Bedrock

    The Embeddings AWS Bedrock node in n8n generates vector embeddings using Amazon’s Bedrock service, which provides access to foundation models from providers like Amazon (Titan), Cohere, and others through a unified AWS API. For organisations already running infrastructure on AWS, this node keeps your AI workloads within the same cloud ecosystem — no need to send data to third-party embedding APIs outside your existing security perimeter. Data residency and security are the primary reasons teams choose AWS Bedrock for embeddings over standalone API providers. Bedrock runs within your AWS region, your data stays within your VPC boundaries, and access is controlled through IAM policies. For industries like finance, healthcare, and government where data handling rules are strict, this matters more than marginal differences in embedding quality between providers. From a technical standpoint, Bedrock embeddings work the same way as any other provider in n8n — you feed in text, get back a vector, and store it in your chosen vector database for semantic search or AI agent retrieval. The difference is operational: billing goes through your existing AWS account, access logs feed into CloudTrail, and the infrastructure scales automatically through AWS’s managed service layer. If your organisation is committed to AWS and needs to build custom AI solutions that comply with your security policies, our team can help you architect RAG pipelines and AI workflows that run entirely within your AWS environment — from embedding generation through to vector storage and inference.
  • Zep Vector Store: Load

    Zep Vector Store: Load

    The Zep Vector Store: Load node in n8n retrieves stored vector embeddings from a Zep memory server, making them available for similarity search and retrieval-augmented generation (RAG) workflows. If you have already indexed documents, conversation histories, or knowledge bases into Zep, this node lets you query that data programmatically within your automation pipelines. Businesses building AI assistants or internal knowledge tools often hit the same wall: the language model knows nothing about your company. Zep solves this by storing your documents as vector embeddings that can be searched semantically. The Load node is the retrieval half of that equation — it pulls relevant context from Zep so your AI can answer questions grounded in your actual data rather than generic training knowledge. This node is especially valuable for teams running AI agent workflows that need long-term memory or access to large document collections. Pair it with an embeddings node and a language model to build RAG pipelines that retrieve the most relevant chunks of text before generating a response. The result is an AI that can reference your internal policies, product documentation, or client records accurately. Our custom AI development team has built RAG pipelines for clients across healthcare, insurance, and professional services — including a medical document classification system that needed precise retrieval from thousands of clinical documents.
  • Supabase Vector Store

    Supabase Vector Store

    The Supabase Vector Store node in n8n connects your workflows to Supabase’s pgvector extension, letting you store, retrieve, and search vector embeddings directly within a PostgreSQL database. For teams already using Supabase as their backend, this node removes the need for a separate vector database — your embeddings live alongside your application data in one place. Vector search is the backbone of retrieval-augmented generation (RAG) and semantic search applications. Instead of relying on exact keyword matches, you store text as mathematical representations (embeddings) and search by meaning. The Supabase Vector Store node handles both the indexing and retrieval sides, making it straightforward to build AI workflows that understand context rather than just matching strings. This is particularly useful for organisations building AI agents that need to reference internal knowledge bases, product catalogues, or support documentation. By embedding your content into Supabase and querying it through n8n, you can build assistants that pull the right information before generating a response. Our team used a similar approach when building an AI application processing system for a talent marketplace, where accurate document retrieval was essential. If you are evaluating vector database options and already run Supabase, this node keeps your architecture simple. Need help designing a RAG pipeline? Talk to our AI development team about building a solution that fits your existing stack.
  • Wolfram|Alpha

    Wolfram|Alpha

    The Wolfram|Alpha node in n8n brings computational knowledge into your automation workflows. Wolfram|Alpha is not a search engine — it computes answers from structured data across mathematics, science, geography, finance, and dozens of other domains. This node lets you query that computational engine programmatically, which is useful when your workflows need precise factual answers rather than generated text. Where AI language models sometimes hallucinate numbers or get calculations wrong, Wolfram|Alpha returns verified, computed results. Need to convert currencies at today’s rate? Calculate compound interest? Look up the population of a city or the molecular weight of a chemical compound? This node handles all of that reliably, making it a strong complement to AI-driven workflows where accuracy matters. For businesses building AI agents, Wolfram|Alpha fills a critical gap. Language models are good at reasoning and language, but they struggle with real-time data and precise computation. By adding this node to your agent’s tool chain, you give it the ability to answer quantitative questions accurately — financial calculations, unit conversions, statistical lookups, and more. Teams working in finance, engineering, logistics, and data processing find this node particularly valuable. If you need help integrating computational tools into your automation stack, our AI consulting team can design a workflow that combines the reasoning power of language models with the precision of Wolfram|Alpha.
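    As a concrete instance of the kind of arithmetic involved: compound interest follows the closed form A = P(1 + r/n)^(nt), which Wolfram|Alpha evaluates for a natural-language query — sketched locally here only to show the formula (the query phrasing below is an assumption):

```javascript
// Compound interest: A = P * (1 + r/n)^(n*t)
// The kind of computation you would delegate to Wolfram|Alpha with a
// query like "compound interest $1000 at 5% compounded monthly, 10 years".
function compoundInterest(principal, annualRate, periodsPerYear, years) {
  return principal * Math.pow(1 + annualRate / periodsPerYear, periodsPerYear * years);
}

// $1,000 at 5% p.a., compounded monthly for 10 years
console.log(compoundInterest(1000, 0.05, 12, 10).toFixed(2)); // "1647.01"
```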
  • Embeddings Google Gemini

    Embeddings Google Gemini

    The Embeddings Google Gemini node in n8n converts text into vector embeddings using Google’s Gemini embedding models. These embeddings are numerical representations of meaning — they capture what your text is about, not just the words it contains. This is the foundation for semantic search, retrieval-augmented generation (RAG), and any workflow where you need to compare or cluster text by meaning rather than keywords. Google Gemini’s embedding models are fast, cost-effective, and produce high-quality vectors that work well across a range of use cases. Whether you are indexing a knowledge base for an AI assistant, building a document similarity engine, or classifying incoming support tickets by topic, this node handles the text-to-vector conversion step within your n8n pipeline. For teams building AI agents or custom AI solutions, embeddings are a core building block. You generate embeddings for your documents once, store them in a vector database like Supabase, Pinecone, or Zep, and then query them at runtime to give your AI access to relevant context. The Gemini embedding models offer a strong balance of quality and speed, particularly for organisations already using Google Cloud services. If you are building a RAG pipeline or need help choosing the right embedding model for your use case, our AI consulting team can help you design an architecture that balances accuracy, speed, and cost.
  • Embeddings Mistral Cloud

    Embeddings Mistral Cloud

    The Embeddings Mistral Cloud node in n8n generates vector embeddings using Mistral AI’s cloud-hosted embedding models. Mistral has built a reputation for producing efficient, high-quality models, and their embedding offering is no exception — it delivers strong semantic representations at competitive pricing, making it an attractive option for teams that want quality embeddings without the cost overhead of larger providers. Embeddings are the building blocks of semantic search, document classification, and retrieval-augmented generation (RAG). When you convert text into a vector embedding, you capture its meaning in a format that machines can compare mathematically. This lets your workflows find relevant documents, cluster similar content, or match user queries to the right knowledge base articles — all based on meaning rather than exact keyword matches. For organisations building AI agents or internal search tools, Mistral embeddings offer a practical middle ground. They are fast enough for real-time applications, accurate enough for production RAG pipelines, and priced competitively for high-volume use. Teams running automated data processing workflows that need to classify or route documents by content find them particularly effective. Choosing the right embedding model affects the quality of everything downstream — search accuracy, agent reliability, and user experience. If you want guidance on which model fits your use case and budget, our consulting team can benchmark options against your actual data and recommend the best fit.
  • Embeddings Google PaLM

    Embeddings Google PaLM

    The Embeddings Google PaLM node in n8n generates vector embeddings using Google’s PaLM (Pathways Language Model) embedding models. While Google has since released newer Gemini models, PaLM embeddings remain available and are still used in production workflows that were built on the PaLM API. This node converts text into dense vector representations that capture semantic meaning, enabling similarity search, clustering, and retrieval-augmented generation within your n8n pipelines. If your organisation adopted Google’s PaLM API early and has existing vector collections built with PaLM embeddings, this node ensures compatibility. Switching embedding models mid-project means re-indexing your entire document collection, so there is real value in maintaining consistency with the model you originally used for indexing. The PaLM embeddings produce reliable vectors for most common use cases including document search, FAQ matching, and content recommendation. For new projects, you may want to evaluate the newer Gemini embedding models alongside PaLM, as Google continues to improve embedding quality and reduce costs with each generation. However, PaLM remains a solid choice for teams with established pipelines that are working well. Whether you are maintaining an existing PaLM-based system or evaluating your options for a new build, our custom AI development team can help you make the right architectural decisions. We have built RAG pipelines across multiple embedding providers and can advise on trade-offs between migration effort and performance gains.
  • In-Memory Vector Store

    In-Memory Vector Store

    In-Memory Vector Store is an n8n node that creates a temporary vector database directly in your workflow process memory. You feed it text, it generates embeddings, and you can immediately run semantic similarity searches against it — all without setting up Pinecone, Supabase, or any external database. The data lives only for the duration of the workflow execution and disappears when the run completes. This makes it perfect for prototyping RAG (retrieval-augmented generation) workflows, processing document batches where you need to cross-reference content within a single run, and testing embedding strategies before committing to a production vector database. A common pattern is loading a set of documents at the start of a workflow, searching them based on user queries or extracted criteria, and then discarding the vectors when the job is done. No infrastructure costs, no database management, no credentials to configure. The trade-off is clear: no persistence. When the workflow finishes, the vectors are gone. For production systems that need to retain and query data across executions, you would move to Pinecone or Supabase. But for rapid iteration, batch processing, and proof-of-concept builds, In-Memory Vector Store removes every barrier to getting started. Our AI consulting team regularly uses it during discovery workshops to demonstrate RAG capabilities to clients before designing the production architecture.
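    Conceptually, the store is just an array of (text, vector) pairs with a top-k cosine-similarity search over it. A toy sketch with made-up three-dimensional vectors — in a real workflow the vectors come from an embeddings node:

```javascript
// Rank stored entries by cosine similarity to a query vector and return
// the k closest — the core operation behind an in-memory vector store.
function topK(store, queryVec, k = 2) {
  const cosine = (a, b) => {
    let dot = 0, na = 0, nb = 0;
    for (let i = 0; i < a.length; i++) {
      dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
    }
    return dot / (Math.sqrt(na) * Math.sqrt(nb));
  };
  return store
    .map((entry) => ({ ...entry, score: cosine(entry.vector, queryVec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

// Illustrative vectors only — not actual embedding output.
const store = [
  { text: 'refund policy', vector: [0.9, 0.1, 0.0] },
  { text: 'shipping times', vector: [0.1, 0.9, 0.1] },
  { text: 'returns process', vector: [0.8, 0.2, 0.1] },
];
console.log(topK(store, [0.85, 0.15, 0.05], 2).map((e) => e.text));
// [ 'refund policy', 'returns process' ]
```

    Because the array lives in process memory, it disappears with the execution — exactly the persistence trade-off described above.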
  • In Memory Vector Store Insert

    In Memory Vector Store Insert

    In Memory Vector Store Insert is the write-side companion to the In-Memory Vector Store node in n8n. While the vector store itself provides the search capability, this insert node handles the loading — taking your text data, passing it through an embedding model, and adding the resulting vectors to the in-memory store so they can be queried later in the same workflow execution. The typical workflow pattern is: trigger fires, documents are fetched from a source (files, APIs, database), the insert node embeds and stores them, and then a retriever node searches them based on a query. All of this happens in a single execution cycle with no external database involved. This makes it the fastest way to build a working RAG prototype or run a batch analysis job where you need to cross-reference a set of documents against each other or against specific criteria. Because the store is ephemeral — data disappears after the execution — this node is best suited for development, testing, and single-run batch processing. For production systems that need to retain vectors across runs, you would swap in Supabase: Load or a Pinecone insert node with minimal workflow changes. Our n8n consultants typically start client projects with in-memory inserts for fast iteration, then migrate to persistent storage once the retrieval logic is validated and the workflow is ready for production deployment.
  • Pinecone Vector Store

    Pinecone Vector Store

    Pinecone Vector Store is an n8n node that connects your workflows to Pinecone — a purpose-built, fully managed vector database designed for production-scale semantic search. Unlike in-memory stores that vanish after each run, Pinecone persists your vectors indefinitely, handles billions of embeddings, and delivers sub-second query responses. If you are building AI applications that need to search large, growing datasets reliably, Pinecone is the infrastructure-grade option. In n8n, this node handles both writing and reading. You can insert new vectors (documents, product listings, support articles), update existing ones, and run similarity searches — all from within your workflow. Pinecone manages the underlying infrastructure: indexing, replication, scaling, and backups. Your team focuses on what goes into the database and what to do with the results, not on managing servers or tuning database performance. Businesses with production RAG systems, large-scale recommendation engines, or customer-facing AI search typically land on Pinecone after outgrowing lighter options. Our AI agent development team has deployed Pinecone-backed systems for clients needing reliable retrieval across large knowledge bases — from insurance data pipelines to enterprise document search. If uptime, scale, and retrieval speed matter to your business, Pinecone is built for exactly that.
  • Npm

    Npm

    Npm is an n8n node that lets you install and use any npm (Node Package Manager) package directly inside your workflow. This unlocks the entire JavaScript ecosystem from within n8n — date formatting libraries, CSV parsers, encryption utilities, data validation tools, and thousands of other packages that solve specific problems without writing everything from scratch. The practical value is straightforward: when n8n's built-in nodes do not cover a specific data transformation, calculation, or formatting requirement, you reach for an npm package instead of building a custom integration. Need to generate PDFs? Parse complex XML? Validate Australian Business Numbers? Calculate business days excluding public holidays? There is almost certainly an npm package that handles it, and this node lets you use it without leaving your workflow. This node is designed for teams with some JavaScript comfort — you write a short code snippet that imports the package and processes your data. For businesses that want to extend n8n beyond its built-in capabilities without maintaining separate microservices, it bridges the gap between low-code automation and full developer flexibility. Our n8n consultants use it regularly to solve edge cases that standard nodes cannot handle, from custom data transformations in data processing pipelines to specialised integrations that connect niche business systems.
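    To show the shape of the logic, here is the ABN example implemented inline in plain JavaScript — the standard weighted-checksum algorithm, which in practice you might pull from a published npm package instead of writing yourself:

```javascript
// Validate an Australian Business Number (ABN): subtract 1 from the first
// digit, apply the official weights, and check the sum is divisible by 89.
function isValidAbn(abn) {
  const digits = abn.replace(/\s/g, '');
  if (!/^\d{11}$/.test(digits)) return false;
  const weights = [10, 1, 3, 5, 7, 9, 11, 13, 15, 17, 19];
  const sum = weights.reduce((acc, w, i) => {
    const d = Number(digits[i]) - (i === 0 ? 1 : 0);
    return acc + d * w;
  }, 0);
  return sum % 89 === 0;
}

console.log(isValidAbn('51 824 753 556')); // true  (passes the checksum)
console.log(isValidAbn('51 824 753 557')); // false
```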