Business+ and higher

Workflow Automation

Orchestrate multi-step file processing pipelines with a visual builder or custom code. Conditional branching, loops, sandboxed ETL, cron scheduling, and distributed workers — all with a full audit trail.

Visual Builder · Conditional Logic · Sandboxed ETL · Cron Scheduling

No-Code Workflows

Build sophisticated file processing pipelines visually — no scripting required.

MnemoShare's workflow engine is a full pipeline runtime, not lightweight automation. Conditional branching, for-each loops, per-step error handling strategies, and second-level cron precision give teams the power of code without writing any.

  • Visual drag-and-drop builder with real-time validation
  • 13+ built-in step types: detect file type, unarchive, upload, generate links, send email, call webhook/API, conditional branching, for-each loops, and more
  • Cron scheduling with second-level precision for time-critical processing
  • Go template system for dynamic values — reference outputs of previous steps
  • Per-step error handling strategies: stop, continue, or goto a specific step
  • Event-driven triggers: file upload events and source polling
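The template system mentioned above is Go's standard templating. As an illustration, here is a minimal stdlib `text/template` sketch of a dynamic value that references earlier step outputs; the context layout (`steps.detect.mimeType`, `steps.upload.fileCount`) is hypothetical, since the actual variable names are product-specific:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// sampleCtx returns a hypothetical workflow context: outputs of earlier
// steps keyed by step name. The real field layout is product-specific.
func sampleCtx() map[string]any {
	return map[string]any{
		"steps": map[string]any{
			"detect": map[string]any{"mimeType": "application/zip"},
			"upload": map[string]any{"fileCount": 3},
		},
	}
}

// renderSubject expands a dynamic value referencing earlier step outputs.
func renderSubject(ctx map[string]any) string {
	tpl := template.Must(template.New("subject").Parse(
		"Processed {{.steps.upload.fileCount}} file(s) of type {{.steps.detect.mimeType}}"))
	var out bytes.Buffer
	if err := tpl.Execute(&out, ctx); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	fmt.Println(renderSubject(sampleCtx()))
	// Processed 3 file(s) of type application/zip
}
```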

Custom Code Execution

Run arbitrary scripts in sandboxed Kubernetes Jobs with full isolation, resource limits, and RBAC.

Push scripts to GitHub — MnemoShare pulls and runs them in isolated K8s namespaces. Build ETL pipelines that ingest, validate, transform, and load data, all with the same audit trail and compliance controls as no-code workflows.

  • Sandboxed Kubernetes namespace with full isolation and configurable resource limits (CPU, memory, timeout)
  • Push scripts to GitHub — MnemoShare pulls and runs them automatically
  • Build ETL pipelines: ingest, validate, transform, and load with multi-step orchestration
  • Same audit trail and compliance controls as no-code workflows
  • File transformation and archive processing: ZIP, TAR, GZIP, BZIP2, and XZ
  • Database query and update steps for SQL and MongoDB
  • Apache Tika metadata extraction for Office documents, PDFs, and 1,400+ file formats
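To make the archive-processing steps concrete, here is a small sketch of signature-based detection and `.tar.gz` extraction using only Go's standard library (which covers gzip, tar, and bzip2; XZ would need a third-party decoder). This is an illustration of the technique, not MnemoShare's implementation:

```go
package main

import (
	"archive/tar"
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// isGzip checks the two-byte gzip magic number, the same kind of
// signature check a "detect file type" step performs.
func isGzip(data []byte) bool {
	return len(data) >= 2 && data[0] == 0x1f && data[1] == 0x8b
}

// extractTarGz returns the regular files inside a .tar.gz payload.
func extractTarGz(data []byte) (map[string][]byte, error) {
	gz, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer gz.Close()

	out := map[string][]byte{}
	tr := tar.NewReader(gz)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		if hdr.Typeflag != tar.TypeReg {
			continue
		}
		body, err := io.ReadAll(tr)
		if err != nil {
			return nil, err
		}
		out[hdr.Name] = body
	}
	return out, nil
}

// makeSample builds a tiny .tar.gz archive in memory for the demo.
func makeSample() []byte {
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)
	tw := tar.NewWriter(gz)
	content := []byte("id,amount\n1,100\n")
	tw.WriteHeader(&tar.Header{Name: "claims.csv", Mode: 0o644, Size: int64(len(content))})
	tw.Write(content)
	tw.Close()
	gz.Close()
	return buf.Bytes()
}

func main() {
	sample := makeSample()
	fmt.Println("gzip detected:", isGzip(sample))
	files, err := extractTarGz(sample)
	if err != nil {
		panic(err)
	}
	for name, body := range files {
		fmt.Printf("%s (%d bytes)\n", name, len(body))
	}
}
```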

Distributed Architecture

Scale workflow execution horizontally across Kubernetes pods with leader election and Redis coordination.

  • Leader-election based multi-worker support — only one worker owns the schedule
  • Redis-backed job queuing via Asynq for reliable, distributed execution
  • Real-time transfer dashboard with live status updates
  • Horizontal scaling across K8s pods — add workers to handle increasing load

Beyond traditional MFT

Most managed file transfer platforms were designed before modern threats existed. Here is how MnemoShare compares.

Capability     | Traditional MFT               | MnemoShare
Scheduling     | Basic cron or manual triggers | Second-level cron precision + event-driven + source polling
Logic          | Linear job execution          | Conditional branching, loops, goto, skip operations
Code execution | Not available                 | Sandboxed Kubernetes Jobs with resource limits and RBAC
Processing     | Move files point-to-point     | Multi-step pipelines: detect, unarchive, transform, route, notify
Scaling        | Single-server execution       | Distributed workers with leader election and Redis coordination

Real-world use cases

Healthcare claims processing

Ingest EDI files from SFTP, detect file type, validate structure, transform to target format, upload to claims system, and notify the processing team — all in a single automated pipeline.

Financial document intake

Poll partner SFTP for loan documents, filter by type (PDF only), store on MnemoShare with retention policy, generate secure download links, and email the underwriting team automatically.

Data transformation pipeline

Extract archives, run custom validation scripts in a sandbox, enrich with metadata via Tika, load into a downstream database, and log every step for audit compliance.

Frequently asked questions

Does MnemoShare support complex workflow logic like conditions and loops?
Yes. The workflow engine supports conditional branching with field comparison operators, regex validation, goto/skip operations, and for-each loops for processing arrays of files. This goes far beyond basic scheduling.
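For intuition, a condition step of this kind boils down to applying an operator to a field value. The sketch below shows the general shape using Go's stdlib `regexp`; the operator names (`equals`, `contains`, `matches`) are illustrative, and the product's actual operator set may differ:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// evalCondition applies a comparison operator to a step-output field value.
// Operator names here are illustrative, not MnemoShare's exact set.
func evalCondition(field, op, want string) (bool, error) {
	switch op {
	case "equals":
		return field == want, nil
	case "contains":
		return strings.Contains(field, want), nil
	case "matches": // regex validation
		return regexp.MatchString(want, field)
	default:
		return false, fmt.Errorf("unknown operator %q", op)
	}
}

func main() {
	// Route only PDF files to the next branch.
	ok, err := evalCondition("claim_2024.pdf", "matches", `\.pdf$`)
	if err != nil {
		panic(err)
	}
	fmt.Println("route to PDF branch:", ok)
	// route to PDF branch: true
}
```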
Can I run custom scripts as part of a workflow?
Yes. MnemoShare executes custom scripts in sandboxed Kubernetes Jobs with full isolation, configurable resource limits (CPU, memory, timeout), and RBAC. Push code to GitHub and MnemoShare pulls and runs it.
How does workflow scheduling work?
MnemoShare uses cron-based scheduling with second-level precision, event-driven triggers (file upload, source polling), and manual triggers. Distributed scheduling uses leader election so only one worker owns the schedule.
What file formats can workflows process?
Workflows handle ZIP (including encrypted), TAR, GZIP, BZIP2, XZ archives with automatic detection and extraction. Apache Tika integration provides metadata extraction for Office documents, PDFs, and 1,400+ file formats.

Ready to see MnemoShare in action?

Start a free trial, schedule a walkthrough, or dive into the docs.