Dev Tools & APIs

  • Postgres

    The Postgres node in n8n connects your workflows directly to PostgreSQL databases, allowing you to run SELECT queries, INSERT rows, UPDATE records, DELETE data, and execute raw SQL statements. It authenticates using standard PostgreSQL credentials (host, port, database name, username, password) and supports SSL connections for production environments. PostgreSQL is one of the most widely used databases in production systems, and the ability to read from and write to it directly from n8n workflows is fundamental to most business automation projects. Instead of building custom middleware or writing scripts to move data in and out of your database, the Postgres node handles it natively within your workflow logic. At Osher, PostgreSQL integration is part of the majority of our system integration projects. We use the Postgres node to sync data between applications and a central database, generate reports by querying production data and formatting it for delivery via email or Slack, process webhook payloads by looking up related records before taking action, and maintain audit logs of automated workflow activity. We also use it in our AI agent builds, where agents need to query databases to retrieve context for generating responses. The node supports parameterised queries (protecting against SQL injection), batch operations for inserting multiple rows efficiently, and the ability to return specific columns or use aliases. For complex data pipeline projects, our data processing team can design and build PostgreSQL-backed automation workflows end to end.
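
    To make the parameterised-query point concrete, here is a minimal sketch of the kind of query the node issues, written with the Node.js pg client. The connection details, table, and column names are illustrative only.

    ```javascript
    // Minimal sketch: the kind of parameterised query the Postgres node runs.
    // Table, columns, and connection details are illustrative only.
    const { Client } = require('pg');

    async function findRecentOrders(customerEmail) {
      const client = new Client({
        host: 'db.example.com',
        port: 5432,
        database: 'production',
        user: 'n8n_readonly',
        password: process.env.PGPASSWORD,
        ssl: true,
      });
      await client.connect();
      try {
        // $1 is a bound parameter, so user input never reaches the SQL text directly
        const result = await client.query(
          'SELECT id, total, created_at FROM orders WHERE customer_email = $1 ORDER BY created_at DESC LIMIT 10',
          [customerEmail]
        );
        return result.rows;
      } finally {
        await client.end();
      }
    }
    ```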
  • Write Binary File

    The Write Binary File node in n8n saves binary data from your workflow to a file on the local filesystem of the machine running n8n. You specify the output file path and the node writes the binary data to disk. This is the counterpart to the Read Binary File node: one reads files in, the other writes files out. This node is useful whenever your workflow produces a file that needs to be stored locally: generated reports, processed images, converted documents, exported data files, or any output that downstream systems expect to find on disk. It is particularly valuable in self-hosted n8n environments where other applications or services on the same server need access to files produced by your workflows. At Osher, we use Write Binary File in data processing pipelines where the output needs to land in a specific directory. Common setups include writing generated CSV reports to a shared drive that a legacy system picks up for import, saving processed images to a directory served by a web server, and writing backup exports to a mounted volume. We also use it in combination with the Spreadsheet File node and Convert to/from Binary Data node to build complete file generation workflows. For writing files to cloud storage (Google Drive, S3, Azure Blob Storage), use the dedicated cloud storage nodes instead. Write Binary File is for local or mounted filesystem destinations. Our n8n consulting team can help you determine the right approach for your file output requirements.
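
    Under the hood, the operation amounts to writing a Buffer to a path on disk. A minimal Node.js sketch of the equivalent; the output path is hypothetical.

    ```javascript
    // Minimal sketch of what writing binary data to disk amounts to in Node.js.
    // The output path is a hypothetical mount; in n8n you set it in the node's parameters.
    const fs = require('fs/promises');

    async function writeReport(buffer) {
      const outputPath = '/data/shared/reports/daily-report.csv';
      await fs.writeFile(outputPath, buffer);
      return outputPath;
    }
    ```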
  • MySQL

    The MySQL node in n8n connects your workflows directly to MySQL and MariaDB databases, letting you read, write, update, and delete records as part of automated processes. It turns n8n into a bridge between your database and every other system in your stack. Most business applications store their core data in a relational database: customer records, orders, inventory, transactions. The MySQL node lets your automations query that data, write new records, update existing ones, and run custom SQL statements. This means your workflows can pull real-time data from production databases, sync records between systems, and trigger actions based on database changes. The node supports parameterised queries to prevent SQL injection, SSL/TLS connections for encrypted communication, and SSH tunnelling for databases behind firewalls. You can run SELECT queries to fetch data, INSERT statements to add records, UPDATE statements to modify data, and raw SQL for complex joins or aggregations. At Osher, MySQL integration is a core part of our data processing and system integration work. We connect MySQL databases to CRMs, reporting tools, API endpoints, and other databases as part of automated data pipelines. Our patient data entry automation project relied on MySQL for structured data storage and retrieval within the workflow. Our n8n team writes optimised queries and configures secure database connections for production use.
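
    As a concrete illustration, this is roughly what a parameterised INSERT of the kind the node issues looks like, sketched with the Node.js mysql2 client. The schema and credentials are made up for the example.

    ```javascript
    // Minimal sketch: a parameterised INSERT of the kind the MySQL node issues.
    // Uses the Node.js mysql2 client; schema and credentials are illustrative.
    const mysql = require('mysql2/promise');

    async function insertCustomer(record) {
      const conn = await mysql.createConnection({
        host: 'db.example.com',
        user: 'n8n_writer',
        password: process.env.MYSQL_PASSWORD,
        database: 'crm',
      });
      try {
        // ? placeholders are bound, preventing SQL injection
        const [result] = await conn.execute(
          'INSERT INTO customers (name, email, created_at) VALUES (?, ?, NOW())',
          [record.name, record.email]
        );
        return result.insertId;
      } finally {
        await conn.end();
      }
    }
    ```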
  • Execute Command

    The Execute Command node in n8n runs shell commands on the server where n8n is hosted. It gives your workflows access to the operating system layer, which means you can run any command-line tool, script, or system utility as a step in your automation. This node fills the gaps that API-based integrations cannot cover. Not every system has an API, but most have a CLI tool or can be scripted. Need to run a Python data processing script? Convert a file format using a command-line tool like ffmpeg or ImageMagick? Call a custom script that interacts with a legacy system? The Execute Command node makes it possible without building a custom n8n node. The node captures both stdout and stderr output and passes them to the next node as data, so downstream steps can process the results. You can run Bash, Python, Node.js, or any other language that is installed on your n8n server. Commands can include dynamic values from earlier workflow nodes, making them data-driven. At Osher, we use Execute Command in custom AI development projects where workflows need to call Python scripts for data processing, machine learning inference, or file manipulation. It is also a key component in our system integration work when connecting n8n to legacy systems that only expose CLI interfaces. Our n8n consulting team configures these nodes with proper security controls, since giving a workflow access to shell commands requires careful sandboxing.
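
    A minimal sketch of what the node does behind the scenes: run a CLI tool and hand stdout and stderr to the next step. The ImageMagick convert call and file paths are illustrative.

    ```javascript
    // Minimal sketch of what the Execute Command node does: run a CLI tool and
    // capture stdout/stderr for downstream nodes. File paths are illustrative.
    const { execFile } = require('child_process');
    const { promisify } = require('util');
    const execFileAsync = promisify(execFile);

    async function convertToPng(inputPath) {
      // execFile passes arguments as an array, avoiding shell-injection issues
      // when values come from earlier workflow nodes
      const outputPath = inputPath.replace(/\.\w+$/, '.png');
      const { stdout, stderr } = await execFileAsync('convert', [inputPath, outputPath]); // ImageMagick
      return { outputPath, stdout, stderr };
    }
    ```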
  • Error Trigger

    The Error Trigger node in n8n fires automatically whenever another workflow encounters an unhandled error during execution. Instead of letting failures go unnoticed, Error Trigger captures the error message, the workflow that failed, and the execution ID, then passes that data into a dedicated error-handling workflow you define. This is essential for production automation. Without proper error handling, a broken API call or malformed data payload can silently stop your workflows, and nobody finds out until a client complains or data goes missing. Error Trigger solves that by routing failure details to wherever your team monitors issues: a Slack channel, an email alert, a logged row in a database, or a ticket in your project management tool. Common setups we build at Osher include Error Trigger workflows that post structured alerts to Slack with the failed workflow name, error message, and a direct link to the failed execution in n8n. For clients running mission-critical automations (invoice processing, patient data syncs, API integrations), we pair Error Trigger with retry logic and escalation paths so problems get resolved before they cause downstream impact. If you are running n8n in production and do not have Error Trigger workflows configured, you are flying blind. Our integration team can set up structured error handling across your entire n8n instance so failures are caught, logged, and acted on immediately.
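
    As an illustration, a Code-node-style snippet that formats the error payload into a Slack message body. The field names reflect the typical Error Trigger payload, but verify them against your n8n version.

    ```javascript
    // Sketch of a Code node in an error workflow that formats the Error Trigger
    // payload into a Slack message. Field names reflect the typical payload
    // shape but may vary between n8n versions.
    const data = $input.first().json;

    const message = [
      `:rotating_light: Workflow failed: *${data.workflow?.name ?? 'unknown'}*`,
      `Error: ${data.execution?.error?.message ?? 'no message captured'}`,
      `Execution: ${data.execution?.url ?? data.execution?.id ?? 'n/a'}`,
    ].join('\n');

    return [{ json: { text: message } }];
    ```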
  • Convert to/from binary data

    The Convert to/from Binary Data node in n8n transforms data between JSON format and binary format within your workflows. When n8n receives a file (a PDF, an image, a spreadsheet, a CSV), that file arrives as binary data. This node lets you convert it into a format your other workflow nodes can process, and convert processed data back into a file when you need to send or store it. This node is the bridge between file-based operations and data-based operations. For example, if a client emails a CSV attachment, the Convert to/from Binary Data node converts it from binary into JSON so you can filter, transform, and enrich the rows with other n8n nodes. When you are done processing, the same node converts your JSON output back into a CSV file for upload to a CRM, cloud storage, or email attachment. We use this node constantly in data processing automations at Osher. Typical builds include converting incoming PDF invoices to binary for OCR processing, transforming API response data into downloadable Excel reports, and converting images between formats during content processing pipelines. Without this node, moving files through multi-step workflows would require custom code at every stage. If your business processes involve files moving between systems (reports, invoices, images, documents), this node is fundamental to automating those movements. Talk to our integration team about building file processing workflows that eliminate manual handling.
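
    A simplified sketch of the round trip this node performs for CSV data: binary in, JSON rows out, and processed rows back to a binary file. It is deliberately naive (no quoted-field handling), purely to show the shape of the transformation.

    ```javascript
    // Simplified sketch of the round trip: CSV buffer -> JSON rows -> CSV buffer.
    // Naive parsing: does not handle quoted fields or embedded commas.
    function csvToRows(buffer) {
      const [header, ...lines] = buffer.toString('utf8').trim().split('\n');
      const keys = header.split(',');
      return lines.map(line => {
        const values = line.split(',');
        return Object.fromEntries(keys.map((k, i) => [k, values[i]]));
      });
    }

    function rowsToCsv(rows) {
      const keys = Object.keys(rows[0]);
      const lines = rows.map(row => keys.map(k => row[k]).join(','));
      return Buffer.from([keys.join(','), ...lines].join('\n'), 'utf8');
    }
    ```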
  • Manual Trigger

    The Manual Trigger node in n8n lets you start a workflow by clicking a button in the n8n editor. Unlike webhook triggers or scheduled triggers that fire automatically, Manual Trigger requires someone to deliberately press “Execute Workflow” to run it. This makes it the go-to trigger for workflows that should only run on demand. Manual Trigger has two primary uses. First, it is essential during development and testing. When you are building a new workflow, you need a way to run it repeatedly while you test and debug each step. Manual Trigger lets you do that without setting up a webhook URL or waiting for a scheduled interval to elapse. Second, it powers workflows that genuinely need human initiation: one-off data migrations, on-demand report generation, manual approval processing, or ad-hoc data cleanup tasks. At Osher, we use Manual Trigger during the build phase of every automation project. Once a workflow is tested and ready for production, we typically replace Manual Trigger with the appropriate automated trigger (Webhook, Schedule, or an app-specific trigger). But for operational workflows that should only run when someone decides to run them, Manual Trigger stays in place permanently. If you are building n8n workflows and spending time setting up complex triggers just to test your logic, Manual Trigger is the simpler answer. Our n8n consulting team can help you structure your workflows with the right trigger for each use case.
  • Interval

    The Interval node in n8n triggers a workflow to run repeatedly at a fixed time interval you define. You set the interval in seconds, minutes, or hours, and n8n executes the workflow on that schedule for as long as the workflow is active. It is the simplest way to run recurring automations without cron syntax or external schedulers. This node is built for tasks that need to happen on a regular cycle: checking an inbox for new messages every five minutes, syncing records between two databases every hour, or pulling fresh data from an API every fifteen minutes. (For time-of-day schedules, like a summary report at 9am each morning, the Cron/Schedule Trigger node is the better fit; Interval counts from activation, not from the clock.) The Interval node handles the timing, and the rest of your workflow handles the logic. At Osher, we use the Interval node heavily in data processing workflows and system integration projects. A typical use case is polling an external API that does not support webhooks. If the source system cannot push data to you, the Interval node lets you pull data from it on a regular schedule. We also use it for periodic health checks, cache refreshes, and scheduled data cleanup tasks. One important consideration: the Interval node runs continuously while the workflow is active, so very short intervals on resource-intensive workflows can put load on your n8n instance. Our team can help you set appropriate intervals and build efficient workflows that run reliably at scale.
  • Loop Over Items (Split in Batches)

    The Loop Over Items node (also called Split in Batches) in n8n processes large datasets in smaller groups rather than all at once. It takes an array of items, splits them into batches of a configurable size, and processes each batch through a defined set of nodes before moving on to the next batch. This node solves a fundamental problem in workflow automation: most APIs, databases, and external services cannot handle thousands of records in a single request. They have rate limits, payload size restrictions, or timeout constraints. Without batch processing, a workflow that needs to update 5,000 CRM records would either hit API limits or time out entirely. The node works by splitting your data into groups (e.g., 50 items per batch), sending each group through downstream nodes, and then looping back for the next group until all items are processed. You can combine it with Wait nodes to add delays between batches, which is essential for respecting rate limits on services like Google, HubSpot, or Xero. At Osher, we use this node in virtually every data processing workflow that handles more than a handful of records. Our BOM weather data pipeline project relied on batch processing to handle large volumes of meteorological data without overwhelming downstream APIs. Our n8n consulting team sizes batches based on the specific API limits of each integration involved.
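
    The underlying pattern, sketched in plain JavaScript: fixed-size batches with a delay between them, the same shape you get from pairing this node with a Wait node.

    ```javascript
    // Sketch of the batching pattern Loop Over Items implements: process a
    // large array in fixed-size groups, pausing between groups to respect
    // rate limits.
    async function processInBatches(items, batchSize, handleBatch, delayMs = 1000) {
      for (let i = 0; i < items.length; i += batchSize) {
        const batch = items.slice(i, i + batchSize);
        await handleBatch(batch); // e.g. one bulk API call per batch
        if (i + batchSize < items.length) {
          await new Promise(resolve => setTimeout(resolve, delayMs)); // Wait-node equivalent
        }
      }
    }
    ```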
  • Respond to Webhook

    The Respond to Webhook node in n8n lets you send a custom HTTP response back to the system that triggered a webhook-based workflow. By default, n8n’s Webhook trigger node sends a simple acknowledgement when it receives a request. The Respond to Webhook node replaces that with a response you control: custom status codes, headers, and body content. This matters when you are building APIs or integrations where the calling system expects specific data back. For example, a form submission on your website might trigger an n8n workflow that validates the data, checks a CRM, and then returns a personalised confirmation message. Without this node, the caller just gets a generic 200 OK with no useful payload. The node supports JSON, text, and binary response types. You can set response headers for CORS, content type, or caching. You can also return different responses based on workflow logic, sending a success message if validation passes and an error message if it fails. At Osher, we use this node extensively in our system integration projects and when building AI agent endpoints. It turns n8n into a proper API backend that external applications can call and receive structured data from, rather than just a one-way automation tool.
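
    Seen from the caller's side, the difference looks like this hypothetical form handler: it POSTs to the workflow's webhook URL and gets back the structured response the Respond to Webhook node was configured to send. The URL and response fields are illustrative.

    ```javascript
    // Hypothetical caller: a website form handler talking to a webhook workflow
    // that ends in a Respond to Webhook node. URL and fields are illustrative.
    async function submitForm(formData) {
      const res = await fetch('https://n8n.example.com/webhook/contact-form', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(formData),
      });
      if (!res.ok) {
        // e.g. the workflow returned 422 because validation failed
        const problem = await res.json();
        throw new Error(problem.message);
      }
      return res.json(); // e.g. { message: "Thanks Jane, we'll be in touch." }
    }
    ```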
  • Cron

    The Cron node (now called Schedule Trigger in newer n8n versions) triggers workflows on a time-based schedule. You configure it to fire at specific intervals (every 5 minutes, hourly, daily at 9am, weekly on Mondays, monthly on the 1st), and n8n automatically executes the workflow at those times. No manual triggering or external event is required. Scheduled execution is essential for automations that need to run on a recurring basis: pulling daily reports from an API, syncing data between systems every hour, sending weekly summary emails, running nightly data cleanup scripts, or checking for new records at regular intervals. The Cron node handles all of these timing patterns. The node uses standard cron expression syntax, giving you precise control over execution timing. You can set it to run at specific minutes, hours, days of the week, days of the month, and months. For simpler requirements, n8n also provides a visual interface where you select the interval without writing cron expressions directly. At Osher, the Cron node is the backbone of maintenance and reporting workflows we build for clients. Scheduled data syncs, automated report generation, and periodic health checks all rely on it. Our BOM weather data pipeline used scheduled triggers to pull fresh weather data at defined intervals. We use Cron-triggered workflows across data processing and RPA projects wherever tasks need to happen on a predictable schedule. If your business has recurring tasks that run on a timer, talk to our n8n team about automating them with scheduled workflows.
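
    For reference, a few standard cron expressions of the kind the node accepts (the five fields are minute, hour, day of month, month, day of week):

    ```javascript
    // Standard five-field cron expressions for common schedules.
    const schedules = {
      everyFiveMinutes: '*/5 * * * *',
      hourly:           '0 * * * *',
      dailyAt9am:       '0 9 * * *',
      mondaysAt9am:     '0 9 * * 1',
      firstOfMonth:     '0 0 1 * *',
    };
    ```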
  • Function

    The Function node (called the Code node in newer n8n versions) lets you write custom JavaScript code inside an n8n workflow. When the built-in nodes can’t handle a specific data transformation, calculation, or logic requirement, the Function node gives you full programmatic control over how data is processed between steps. This node receives data from the previous node, runs your JavaScript code against it, and outputs the result to the next node. You can manipulate arrays, perform complex calculations, call utility functions, parse irregular data formats, generate dynamic values, and implement logic that goes beyond what the visual configuration nodes offer. Common use cases include: reformatting dates between different systems, parsing unstructured text (like extracting a tracking number from an email body), building complex JSON payloads for APIs that expect nested structures, aggregating or deduplicating data, and implementing business rules that involve multiple conditions and calculations that would be unwieldy as a chain of If nodes. At Osher, we use the Function node in projects where data transformation requirements exceed what the Edit Fields node can handle. Our talent marketplace project used Function nodes for parsing and scoring application data. It’s a regular part of our custom AI development and system integration work, especially when dealing with APIs that have unusual data formats or complex business logic requirements. If your automation needs custom data processing logic, talk to our n8n team about when and how to use the Function node effectively.
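
    A short example in the classic Function-node style, reformatting a date field on every incoming item; the order_date field name is hypothetical. (In the newer Code node, items arrive via $input.all() rather than the implicit items variable.)

    ```javascript
    // Function-node style: transform every incoming item and return the array.
    // The order_date field is a hypothetical example.
    return items.map(item => {
      const raw = item.json.order_date;             // e.g. "03/11/2025" (DD/MM/YYYY)
      const [day, month, year] = raw.split('/');
      item.json.order_date_iso = `${year}-${month}-${day}`; // "2025-11-03"
      return item;
    });
    ```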
  • No Operation, do nothing

    The No Operation (NoOp) node in n8n does exactly what its name says: nothing. It receives data, passes it through unchanged, and does no processing. While that might sound pointless, it serves several practical purposes in workflow design that make complex automations easier to build, test, and maintain. The most common use for the NoOp node is as an endpoint for conditional branches that don’t require action. When an If node splits a workflow into two paths, sometimes one path needs processing and the other doesn’t. Instead of leaving the false branch disconnected (which can cause n8n to flag warnings), you connect it to a NoOp node to explicitly indicate “nothing should happen here.” This makes the workflow’s logic clear to anyone reviewing it. NoOp nodes also serve as merge points. When multiple branches of a workflow need to reconverge into a single path, connecting them through a NoOp node before the merge point can simplify the workflow structure. During development, NoOp nodes act as placeholders for nodes you plan to add later, keeping the workflow connected and executable while you build incrementally. At Osher, we use NoOp nodes in production workflows to keep branching logic clean and readable. When we build complex business automations with multiple conditional paths, NoOp nodes on the “do nothing” branches make the intent explicit. It’s a small thing, but it makes workflows significantly easier to troubleshoot and hand off to client teams. We apply this approach across our n8n consulting projects. If you’re building n8n workflows and want them structured for long-term maintainability, talk to our team about workflow design best practices.
  • Switch

    The Switch node in n8n routes data items to different output branches based on the value of a field. While the If node handles binary yes/no decisions, the Switch node handles multi-way routing: send items down one of several paths depending on which value a field contains. Think of it as a multi-lane roundabout instead of a T-junction. A typical use case: incoming support tickets have a “category” field that could be “billing,” “technical,” “general,” or “urgent.” The Switch node reads that field and sends each ticket to the branch that handles its category. Billing tickets go to the finance workflow, technical tickets get escalated to engineering, general enquiries go to the standard response queue, and urgent tickets trigger an immediate alert. The Switch node supports matching on exact values, regex patterns, and numeric ranges. You configure up to four named outputs (or more in recent n8n versions), each with its own matching rules. Items that don’t match any rule go to a fallback output, so nothing gets silently dropped. It processes each item independently, so a batch of records can split across multiple output branches in a single execution. At Osher, we use the Switch node whenever a workflow needs to route data to more than two destinations. Our talent marketplace application processing used Switch nodes to route candidates to different evaluation pipelines based on role type. It’s a standard component in our business automation and RPA projects wherever multi-path routing is required. If your automation needs to sort items into multiple categories or route data to different systems based on type, talk to our n8n team about using the Switch node.
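
    The routing decision described above, written out as plain logic to show the semantics, including the fallback output. The category values and branch names are illustrative.

    ```javascript
    // The Switch node's routing semantics as plain logic. Category values and
    // branch names are illustrative.
    function route(ticket) {
      switch (ticket.category) {
        case 'billing':   return 'finance-workflow';
        case 'technical': return 'engineering-escalation';
        case 'general':   return 'standard-queue';
        case 'urgent':    return 'immediate-alert';
        default:          return 'fallback-output'; // nothing is silently dropped
      }
    }
    ```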
  • Webhook

    The Webhook node turns an n8n workflow into an API endpoint. It creates a URL that, when called by an external system, triggers the workflow and passes the incoming request data (headers, query parameters, body) to the next node in the chain. This is how you build event-driven automations that respond to real-time events from other systems. Webhooks solve the polling problem. Instead of having your workflow check another system every few minutes for new data (which wastes resources and introduces delays), the external system notifies your workflow the moment something happens. A CRM sends a webhook when a deal closes. A payment processor sends a webhook when a charge succeeds. A form tool sends a webhook when someone submits a response. Your n8n workflow starts processing immediately. The Webhook node supports GET and POST requests, can accept JSON, form data, and binary payloads, handles custom headers for authentication verification, and can return custom responses to the calling system. You can also configure it for testing (temporary URL) or production (persistent URL that survives workflow restarts). At Osher, webhook-triggered workflows are our most common build pattern. Our property inspection automation uses webhooks to trigger report generation when field inspectors submit data. We build webhook-based system integrations and business automations for clients who need their systems to react to events in real time rather than on a schedule. If you need your automation to respond instantly when something happens in another system, talk to our n8n team about building webhook-triggered workflows.
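
    For orientation, this is roughly the shape of the data the Webhook node hands to the next node when called. The exact fields can vary between n8n versions, and the values here are invented.

    ```javascript
    // Approximate shape of the Webhook node's output for an incoming request.
    // Values are invented; field layout may vary between n8n versions.
    const incoming = {
      headers: { 'content-type': 'application/json', 'x-signature': '...' },
      params: {},                                     // path parameters, if the URL defines any
      query: { source: 'crm' },                       // ?source=crm
      body: { event: 'deal.closed', dealId: 4217 },   // the POSTed payload
    };
    ```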
  • Eden AI integrations

    Eden AI is an API aggregation platform that gives developers access to multiple AI providers through a single, standardised interface. Instead of building separate integrations for OpenAI, Google Cloud Vision, AWS Textract, and other AI services, you connect to Eden AI once and switch between providers without changing your code. This matters for businesses that need AI capabilities like text analysis, image recognition, document parsing, or translation but don’t want to be locked into a single provider. Eden AI lets you compare results across providers, pick the best-performing one for your use case, and switch if pricing or quality changes. It covers categories including OCR, sentiment analysis, speech-to-text, object detection, and language translation. At Osher, we use Eden AI within n8n workflows when a client’s automation needs AI processing but the specific provider isn’t critical. For example, a document classification workflow might use Eden AI’s OCR endpoint to extract text, then route the output through a language model for categorisation. Our medical document classification project shows how we approach AI-powered document processing for clients. We also build custom AI solutions where Eden AI’s multi-provider approach reduces vendor lock-in risk. If you’re evaluating AI services and want to test multiple providers without building separate integrations for each one, talk to our team about how Eden AI fits into your automation stack.
  • IPinfo integrations

    IPinfo is an IP address intelligence service that provides geolocation, company, carrier, privacy detection, and ASN (Autonomous System Number) data for any IP address. Businesses use it to understand where their website visitors or API consumers are located, detect VPN and proxy usage, identify the organisations behind IP traffic, and enforce location-based compliance rules. The common problem: you have IP addresses in your logs, analytics, or application data, but no way to turn those addresses into actionable information. IPinfo’s API takes an IP address and returns structured data including city, region, country, company name, whether it’s a VPN or hosting provider, and the network it belongs to. This data feeds into fraud detection, content personalisation, compliance checks, and traffic analysis. At Osher, we integrate IPinfo into n8n workflows that need IP-based decision-making. For example, a lead qualification workflow might check whether a form submission came from a corporate IP (genuine lead) or a VPN exit node (potentially low quality). A compliance workflow might verify that user access originates from approved geographic regions. We build these kinds of automated data processing pipelines and system integrations for clients across industries including insurance, finance, and SaaS. If you need to enrich IP address data in your workflows or add geographic intelligence to your automation, talk to our team about integrating IPinfo.
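
    A minimal sketch of a lookup against IPinfo's API. The endpoint pattern follows IPinfo's documented style, but treat the exact response fields as assumptions to verify against their docs.

    ```javascript
    // Sketch of an IPinfo lookup. Response fields shown are typical of the API;
    // verify the exact shape against IPinfo's documentation.
    async function lookupIp(ip) {
      const res = await fetch(`https://ipinfo.io/${ip}/json?token=${process.env.IPINFO_TOKEN}`);
      const info = await res.json();
      // e.g. { ip, city, region, country, org, ... }
      return info;
    }
    ```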
  • HTTP Request

    The HTTP Request node is one of the most used nodes in n8n. It lets a workflow send HTTP requests (GET, POST, PUT, PATCH, DELETE) to any web API or URL, making it the universal connector for services that don’t have a dedicated n8n integration node. If a system has a REST API, the HTTP Request node can talk to it. This node handles authentication (API keys, OAuth2, bearer tokens, basic auth), custom headers, query parameters, request bodies (JSON, form data, binary), pagination, response parsing, and error handling. It can upload and download files, follow redirects, and process responses as JSON, text, or binary data. At Osher, the HTTP Request node appears in nearly every n8n workflow we build. It connects to client-specific APIs, third-party SaaS platforms, internal microservices, and AI model endpoints. We used it extensively in our BOM weather data pipeline project, where the workflow pulled weather data from external APIs and pushed processed results into the client’s systems. It’s also central to every system integration project where the target system doesn’t have a native n8n node. If your automation project needs to connect to an API that n8n doesn’t have a built-in node for, the HTTP Request node fills that gap. Talk to our n8n team about building API integrations into your workflows.
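
    What a single configured HTTP Request step boils down to, sketched in JavaScript: an authenticated call with query parameters and JSON parsing. The API URL, parameters, and token are placeholders.

    ```javascript
    // What one configured HTTP Request step amounts to: an authenticated GET
    // with query parameters. URL, parameters, and token are placeholders.
    async function fetchOrders(page = 1) {
      const url = new URL('https://api.example.com/v1/orders');
      url.searchParams.set('page', String(page));
      url.searchParams.set('per_page', '100');
      const res = await fetch(url, {
        headers: { Authorization: `Bearer ${process.env.API_TOKEN}` },
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.json();
    }
    ```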
  • BugShot

    BugShot is a visual bug reporting tool that lets testers and team members capture screenshots, annotate them with markers and notes, and submit them directly to issue trackers without leaving the application they are testing. It provides browser extensions and integrations that streamline the path from “I found a problem” to “there is a ticket in Jira with all the details.” The problem BugShot addresses is the friction in bug reporting. Testers find an issue, take a screenshot, open their issue tracker, create a new ticket, upload the screenshot, type out the steps to reproduce, and add environment details. BugShot compresses this into a single flow: capture, annotate, and submit. Browser and device information is attached automatically. Osher includes tools like BugShot in development projects where QA efficiency directly affects delivery timelines. By reducing the time each bug report takes to file, testers can cover more ground during testing cycles. We connect BugShot’s output to your project management tools through our integration services, making sure reports land in the right project board with the right context attached.
  • Adobe integrations

    Adobe integrations connect Adobe’s creative and document tools (Photoshop, Illustrator, InDesign, Acrobat, Premiere Pro, After Effects, and others) with external business systems through APIs, webhooks, and workflow automation. Adobe provides several integration surfaces: Creative Cloud Libraries for shared assets, Adobe I/O APIs for programmatic access to individual products, Adobe Experience Platform for marketing data, and Adobe PDF Services for document generation and manipulation. The problem Adobe integrations solve is creative workflow fragmentation. Design teams produce assets in Adobe tools, but those assets need to flow into CMS platforms, DAM systems, marketing automation, print production, and e-commerce catalogues. Without integration, this transfer is manual: export, rename, upload, tag, publish. Adobe’s APIs make it possible to automate these steps, from asset creation through to final delivery. Osher connects Adobe’s APIs and export workflows into broader business automation systems. Common projects include automating document generation with Adobe PDF Services (generating contracts, reports, or invoices from templates), syncing creative assets from Adobe Libraries to web CMS or DAM platforms, and building automated data processing pipelines that extract text or metadata from PDF documents. Our team handles the API integration so your creative and operations teams can focus on their actual work.
  • Apiary integrations

    Apiary (now part of Oracle) is an API design and documentation platform that lets teams write API specifications in API Blueprint or Swagger/OpenAPI format, generate interactive documentation from those specs, and run a mock server that returns sample responses before any backend code is written. It provides a collaborative environment where developers, product managers, and API consumers can design, review, and test APIs before committing to implementation. The problem Apiary solves is the “build first, document later” pattern that leads to poorly designed APIs. When teams jump straight into coding, API design decisions get made ad hoc and documentation becomes an afterthought. Apiary flips this by making the specification the starting point. You write the API contract, generate a mock server automatically, let consumers test against it, and only then build the real backend. This catches design problems early when they are cheap to fix. Osher uses API-first design principles in our custom development projects and recommends tools like Apiary when clients need clean API contracts for system integrations. Having a well-documented API spec before development starts reduces integration friction, speeds up parallel development (frontend and backend teams work simultaneously against the mock), and produces better documentation as a natural byproduct of the design process.
  • Starton

    Starton is a blockchain development platform that provides APIs and infrastructure for deploying smart contracts, managing wallets, monitoring on-chain events, and interacting with multiple blockchain networks (Ethereum, Polygon, BNB Chain, Avalanche, and others). It gives developers a unified REST API layer over blockchain operations that would otherwise require direct node interaction and deep protocol knowledge. The problem Starton solves is blockchain infrastructure management. Running your own nodes, writing deployment scripts, monitoring transactions, and handling wallet security is a full engineering discipline. Starton abstracts this into API calls: deploy a smart contract with a POST request, monitor wallet balances with a GET request, and receive webhooks when on-chain events occur. This makes blockchain functionality accessible to backend developers who do not specialise in Web3. Osher uses Starton as part of custom development projects where clients need blockchain features integrated into existing applications. Rather than building blockchain infrastructure from scratch, we connect Starton’s API to your backend systems through our integration services. This is practical for use cases like token-gated access, on-chain data verification, or automated smart contract interactions triggered by business events.
  • Instabug

    Instabug is a mobile app monitoring platform that provides bug reporting, crash analytics, performance monitoring, and in-app user feedback collection for iOS and Android applications. When users encounter issues, they can shake their device (or use a custom trigger) to submit a bug report that automatically includes a screenshot, device logs, network requests, and environment details, giving developers the context they need to reproduce and fix problems. The problem Instabug solves is the gap between user-reported issues and the technical detail developers need. When a user says “the app crashed,” that is not enough to debug. Instabug automatically captures the crash stack trace, the sequence of user actions leading up to it, network request history, device model, OS version, and memory state. This turns vague bug reports into actionable debugging data. Osher integrates Instabug into mobile application projects as part of our custom development services. Beyond the SDK setup, we connect Instabug’s reporting data to your project management and support tools so that crash reports and user feedback automatically create tickets in Jira, Linear, or your preferred tracker. Our integration team builds these pipelines so your development and support teams work from the same data without manual transfer.
  • Cronly

    Cronly is a monitoring and alerting service built specifically for n8n workflow automation. It watches your n8n workflows for failures, missed executions, and performance issues, then sends notifications through Slack, Discord, email, or webhooks when something goes wrong. If you run n8n workflows that your business depends on, Cronly tells you when they break before your customers notice. The problem Cronly solves is workflow reliability visibility. n8n is powerful for building automation, but it does not provide robust built-in monitoring for production workloads. If a scheduled workflow fails at 2am, you might not find out until someone reports missing data the next morning. Cronly watches for exactly this: failed executions, workflows that should have run but did not, and execution times that exceed your defined thresholds. At Osher, Cronly is a standard part of our n8n consulting engagements. When we build and deploy n8n workflows for clients, we set up Cronly monitoring so that both our team and yours know immediately when something needs attention. This ties into our broader process automation services where workflow reliability directly impacts business operations. We have written about n8n hosting and operations in our self-hosting guide.
  • BugReplay

    BugReplay is a browser-based bug reporting tool that records screen activity along with browser console logs, network requests, and system environment details. When a tester or user encounters a bug, BugReplay captures a video of what happened on screen while simultaneously logging the technical data developers need to reproduce the issue. The result is a shareable link containing both the visual replay and the underlying technical context. The problem BugReplay solves is the gap between “what the user saw” and “what the developer needs.” Most bug reports are screenshots with vague descriptions. Developers waste time trying to reproduce issues without knowing what browser was used, what network requests failed, or what console errors appeared. BugReplay captures all of this automatically alongside the screen recording, which shortens the time from report to fix. Osher integrates BugReplay into QA and development workflows where client-facing applications are being tested. We connect BugReplay outputs to issue trackers so that development teams receive complete, actionable bug reports without needing to go back and forth with testers for missing details. This fits into our broader approach to business process automation, where reducing manual handoffs speeds up delivery cycles.
  • One AI

    One AI is a language AI API platform that gives developers access to pre-built natural language processing (NLP) capabilities through a single unified API. Instead of training custom models, you call One AI’s endpoints to perform tasks like text summarisation, sentiment analysis, keyword extraction, topic detection, and entity recognition on any text or audio input. The problem One AI solves is that building NLP pipelines from scratch is slow and expensive. Most businesses need the same core language capabilities (summarise this document, extract these entities, classify this text), but training and hosting models requires ML engineering resources that many teams lack. One AI packages these capabilities as composable API “skills” that can be chained together in a single request. Osher connects One AI into automated data processing pipelines where unstructured text needs to be parsed, classified, or summarised at scale. For example, we have built workflows that use One AI to extract key information from incoming support emails, summarise meeting transcripts, or classify documents by topic before routing them to the right team. Our AI agent development practice uses tools like One AI as components within larger intelligent systems.
  • ConfigCat

    ConfigCat is a feature flag and remote configuration management service that lets development teams toggle features on and off, run A/B tests, and push configuration changes to applications without deploying new code. It provides SDKs for every major platform (JavaScript, Python, .NET, Java, Go, iOS, Android, and more) and a dashboard where non-technical team members can manage feature flags independently. The problem ConfigCat solves is deployment risk. Releasing new features as all-or-nothing code deployments is risky. If something breaks, you roll back the entire release. Feature flags let you wrap new code in toggles that can be switched on for a small percentage of users first, then gradually rolled out, or instantly killed if problems appear. ConfigCat makes this possible without building your own flag infrastructure. Osher integrates ConfigCat into custom application development projects where controlled feature rollouts matter. We also use feature flags within AI agent deployments to test different model configurations or prompt variations against live traffic before committing to changes. ConfigCat’s targeting rules allow us to segment users by attribute, making it useful for gradual rollouts across regions or customer tiers.
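
    A short sketch of a flag check with ConfigCat's Node.js SDK. The flag key and user attributes are invented, and the exact SDK entry points should be verified against ConfigCat's current documentation.

    ```javascript
    // Sketch of a feature-flag check with the ConfigCat Node.js SDK. Flag key
    // and user attributes are invented; verify API details against the docs.
    const configcat = require('configcat-node');

    const client = configcat.getClient(process.env.CONFIGCAT_SDK_KEY);

    async function isNewDashboardEnabled(userId, country) {
      const user = new configcat.User(userId, undefined, country);
      // Falls back to `false` if the flag cannot be evaluated
      return client.getValueAsync('newDashboardEnabled', false, user);
    }
    ```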
  • Pinata

    Pinata is a file storage and delivery platform built on IPFS (InterPlanetary File System) that gives developers APIs for uploading, pinning, and serving files through dedicated gateways. While it started as an NFT metadata hosting service, Pinata has expanded into a general-purpose file infrastructure tool that handles image optimisation, video streaming, and content delivery for any application that needs reliable, fast file hosting. The problem Pinata solves is file infrastructure complexity. Hosting files reliably, serving them quickly across regions, handling image resizing on the fly, and managing access permissions requires stitching together multiple services (cloud storage, CDN, image processing). Pinata bundles these capabilities behind a single API with both IPFS-based and traditional HTTP delivery options. Osher integrates Pinata into automated data processing workflows where files need to be stored, transformed, and served programmatically. This includes document management systems where uploaded files are automatically processed and made available through Pinata’s CDN, or applications where user-generated content needs fast, reliable delivery. Our system integration team connects Pinata’s upload and retrieval APIs to your existing application stack.
  • Rootly

    Rootly is an incident management platform that runs inside Slack. When something breaks in production, Rootly automates the incident response process: creating a dedicated Slack channel, paging on-call engineers, tracking status updates, coordinating communication with stakeholders, and generating post-incident reports, all without engineers leaving their primary communication tool. The problem Rootly addresses is the chaos that happens when systems go down. Without structured incident management, teams waste critical minutes figuring out who should be working on the problem, stakeholders do not know what is happening, and post-incident reviews lack the timeline data needed to prevent recurrence. Rootly brings order to this process by automating the operational overhead so engineers focus on fixing the issue. At Osher, we integrate Rootly with monitoring, alerting, and operations systems as part of our system integration work. Using n8n, we build pipelines that connect Rootly to monitoring tools (Datadog, PagerDuty, Grafana), ticketing systems (Jira, Linear), and status pages so that incidents are detected, managed, and documented automatically. We also build custom automation runbooks that Rootly can execute during incidents, reducing mean time to resolution. Rootly suits engineering and DevOps teams that manage production systems and need a structured, repeatable incident response process that works within their existing Slack-based communication workflow.
  • NMKR

    NMKR is a Web3 development platform built on the Cardano blockchain that lets businesses mint NFTs, create token-gated experiences, and accept crypto payments without writing smart contract code. It provides REST APIs for minting, managing, and selling NFTs programmatically, along with a no-code dashboard for teams that want to launch NFT projects quickly. The problem NMKR solves is straightforward: building on Cardano from scratch requires deep knowledge of Plutus smart contracts and the eUTXO model. NMKR abstracts that away, giving developers familiar REST endpoints to mint tokens, set up royalty structures, and manage on-chain metadata. E-commerce businesses use it to add NFT-based loyalty programs, while creative agencies use it for digital collectible drops. At Osher, we connect NMKR’s API endpoints into broader business workflows using system integration services. That might mean triggering an NFT mint when a customer completes a purchase, syncing token ownership data back to your CRM, or building automated royalty distribution pipelines. If you are exploring how blockchain fits into your product or customer experience, our AI consulting team can map out where NMKR adds real value without overcomplicating your stack.
  • Lokalise

    Lokalise is a translation management system (TMS) built for software teams that need to localise apps, websites, games, and other digital products into multiple languages. It provides a web-based editor where translators work on strings in context, with integrations that connect directly to your code repositories and design tools. The problem Lokalise addresses is the disconnect between development and translation. Developers add new UI strings in code, but those strings need to reach translators, get translated, reviewed, and merged back without breaking the build. Without a TMS, this process involves manual file exports, email chains, and merge conflicts. Lokalise automates the handoff so developers keep shipping features while translations happen in parallel. At Osher, we connect Lokalise to development and content pipelines as part of our system integration work. Using n8n, we build workflows that sync strings between GitHub repositories and Lokalise, notify translators when new content arrives, and merge approved translations back into the codebase. For content-heavy sites, we also connect Lokalise to CMS platforms so marketing teams can manage multilingual content without developer involvement. Lokalise is a strong choice for development teams building multilingual SaaS products, mobile apps, or web applications who need translation workflows that keep pace with agile release cycles.
  • Cryptolens

    Cryptolens is a software licensing platform that lets developers add licence key validation, feature gating, usage tracking, and subscription management to their applications. It provides APIs and SDKs for .NET, Python, Java, C++, and other languages, so you can enforce licensing rules directly in your software without building a custom system. The problem Cryptolens solves is licence enforcement. Software companies that sell desktop applications, plugins, APIs, or on-premises tools need a way to verify that users have valid licences, restrict access to paid features, and track usage for billing. Building this from scratch is time-consuming and error-prone. Cryptolens handles key generation, validation, activation limits, expiry dates, and feature flags through a managed API. At Osher, we integrate Cryptolens into client applications and connect it to business systems as part of our system integration and custom development work. A typical project connects Cryptolens to a payment system (Stripe, Paddle) so that licence keys are generated automatically on purchase, activated when the user installs the software, and deactivated when a subscription lapses. We also build dashboards that pull usage analytics from Cryptolens for product and sales teams. Cryptolens is built for software vendors, ISVs, and development teams that distribute commercial software and need reliable licence management without building and maintaining their own licensing infrastructure.
  • Kadoa

    Kadoa is an AI-powered web scraping platform that extracts structured data from websites without requiring you to write CSS selectors, XPath queries, or custom parsing code. You point Kadoa at a webpage, and its AI identifies the data structure (product listings, job postings, contact details, pricing tables) and extracts it into clean JSON or CSV format. Traditional web scraping breaks every time a website changes its layout. Kadoa reduces this maintenance burden because its AI adapts to structural changes rather than relying on hardcoded element selectors. This makes it particularly useful for ongoing data collection from sites that update their design frequently. At Osher, we use Kadoa as a data source within broader automated data processing pipelines. A typical project connects Kadoa to an n8n workflow that schedules extraction, processes the returned data, and loads it into a database or CRM for analysis. We have built competitive intelligence, market research, and lead enrichment systems using Kadoa as the extraction layer. For an example of how we build automated data pipelines, see our BOM weather data pipeline case study. Kadoa is a good fit for teams that need structured data from the web but do not have developers available to build and maintain traditional scraping scripts.
  • ZenRows

    ZenRows is a web scraping API built to extract data from websites that actively block automated access. It handles anti-bot protections, CAPTCHAs, JavaScript rendering, and IP rotation behind a single API call, so your team gets clean structured data without building and maintaining scraping infrastructure. Most businesses hit the same wall: they need competitor pricing, product catalogues, or market data from websites that detect and block scrapers within minutes. ZenRows solves this by routing requests through residential proxies, rendering JavaScript-heavy pages in headless browsers, and rotating fingerprints automatically. You send a URL, you get back HTML or parsed JSON. At Osher, we connect ZenRows into broader automated data processing pipelines using n8n. A typical setup pulls data from target sites via ZenRows, cleans and transforms it through an n8n workflow, then loads it into a database or dashboard for analysis. We have built similar pipelines for insurance and property clients who needed external data feeds running reliably without manual intervention. See our BOM weather data pipeline case study for a real example of automated data extraction at scale. ZenRows is particularly useful for teams in e-commerce, market research, and competitive intelligence who need ongoing access to web data but lack the engineering resources to maintain proxy infrastructure and bypass logic themselves.
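
    A sketch of how a request to ZenRows typically looks; the base URL and parameter names follow their documented pattern but should be treated as assumptions to check against the current API reference.

    ```javascript
    // Sketch of a ZenRows request. Base URL and parameter names are assumptions
    // based on the documented pattern; verify against the current API reference.
    async function scrape(targetUrl) {
      const api = new URL('https://api.zenrows.com/v1/');
      api.searchParams.set('apikey', process.env.ZENROWS_API_KEY);
      api.searchParams.set('url', targetUrl);
      api.searchParams.set('js_render', 'true'); // render JavaScript-heavy pages
      const res = await fetch(api);
      return res.text(); // raw HTML back; ZenRows can also return parsed output
    }
    ```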
  • TestMonitor

    TestMonitor is a cloud-based test management platform that gives QA teams a structured way to plan test cases, track execution, and report on results. It replaces spreadsheet-based testing workflows with a purpose-built system that ties test cases to requirements and defects in one place. The core problem TestMonitor solves is visibility. When testing happens in spreadsheets or disconnected tools, project managers lose track of what has been tested, what passed, what failed, and what is blocking release. TestMonitor provides dashboards and reports that answer those questions without chasing people for status updates. At Osher, we integrate TestMonitor into development and deployment workflows so that test results feed directly into project tracking and release management systems. Using n8n, we connect TestMonitor’s API to tools like Jira, Slack, and CI/CD pipelines, creating automated notifications when tests fail and blocking deployments until critical test suites pass. Our system integration services cover the full pipeline from test execution through to release gating. TestMonitor is a good fit for organisations running manual or semi-automated testing who need better traceability between requirements, test cases, and defects without adopting a heavyweight enterprise ALM platform.
  • Transifex

    Transifex is a cloud-based localisation platform that manages the translation of software interfaces, websites, mobile apps, and documentation into multiple languages. It provides a centralised workspace where translators, reviewers, and developers collaborate on multilingual content without passing files back and forth manually. The core problem Transifex solves is coordination. When a business operates in multiple languages, every product update, marketing page, or support article needs translating. Without a proper system, translation requests get lost in email chains, version conflicts arise when multiple people edit the same file, and shipped products end up with missing or outdated translations. Transifex eliminates this by giving everyone a single source of truth for translation status. At Osher, we integrate Transifex into development and content workflows using n8n and API connections. A typical setup automatically pushes new or changed strings from a codebase or CMS to Transifex, notifies translators, and pulls completed translations back into the application once approved. Our system integration team builds these pipelines so that localisation happens continuously rather than as a manual batch process before each release. Transifex works well for software companies, SaaS platforms, and content-heavy businesses that need to maintain products and websites in multiple languages without dedicated localisation engineers on staff.