Integrations
The following services and applications can be easily integrated into Corporate Memory workflows:
- **Anthropic / Claude**: Use the Execute Instructions or Create Embeddings task to interact with Large Language Models (LLMs) provided by Anthropic / Claude.
- **Avro**: Use the Avro dataset to read and write files in the Avro format.
- **Azure AI Foundry**: Use the Execute Instructions or Create Embeddings task to interact with Large Language Models (LLMs) provided by Azure AI Foundry.
- **CSV**: Comma-separated values (CSV) is a text data format that can be read and written with the CSV dataset.
- **eMail / SMTP**: Send plain-text or HTML-formatted email messages using an SMTP server.
- **Excel**: Use the Excel task to read and write Excel workbooks in the Open XML format (XLSX).
- **Google Drive**: Use the Excel (Google Drive) task to read and write Excel workbooks in Google Drive.
- **GraphQL**: Execute a GraphQL query and process the result in a workflow.
- **Hive**: Read from or write to an embedded Apache Hive database endpoint.
- **Jira**: Execute a JQL query on a Jira instance to fetch and integrate issue data.
- **JSON**: Use the JSON dataset to read and write JSON (JavaScript Object Notation) files.
- **JSON Lines**: Use the JSON dataset to read and write files in the JSON Lines text file format.
- **Kafka**: Send messages to and receive messages from a Kafka topic.
- **Kubernetes**: Execute a command in a Kubernetes pod and capture its output for further processing.
- **MariaDB**: MariaDB can be accessed with the JDBC endpoint dataset and a JDBC driver.
- **Mattermost**: Send workflow reports or any other message to users and groups in your Mattermost instance with the Send Mattermost messages task.
- **Microsoft SQL**: Microsoft SQL Server can be accessed with the JDBC endpoint dataset and a JDBC driver.
- **MySQL**: MySQL can be accessed with the JDBC endpoint dataset and a JDBC driver.
- **Neo4j**: Use the Neo4j dataset to read and write Neo4j graphs.
- **Nextcloud**: Use a Nextcloud instance to download files for processing, or to upload files created with Corporate Memory.
- **Office 365**: Use the Excel (OneDrive, Office365) task to read and write Excel workbooks in Office 365.
- **Ollama**: Use the Execute Instructions or Create Embeddings task to interact with Large Language Models (LLMs) provided by Ollama.
- **OpenAI**: Use the Execute Instructions or Create Embeddings task to interact with Large Language Models (LLMs) provided by OpenAI.
- **OpenRouter**: Use the Execute Instructions or Create Embeddings task to interact with Large Language Models (LLMs) provided by OpenRouter.
- **ORC**: Use the ORC dataset to read and write files in the ORC format.
- **Parquet**: Use the Parquet dataset to read and write files in the Parquet format.
- **pgvector**: Store vector embeddings in pgvector and search them with the Search Vector Embeddings task.
- **PostgreSQL**: PostgreSQL can be accessed with the JDBC endpoint dataset and a JDBC driver.
- **PowerBI**: Leverage your Knowledge Graphs in Power BI using the Corporate Memory Power BI Connector.
- **RDF**: Use the RDF file dataset to read and write files in RDF formats (N-Quads, N-Triples, Turtle, RDF/XML, or RDF/JSON).
- **REST**: Execute REST requests with the Execute REST requests task.
- **Salesforce**: Interact with your Salesforce data, such as Create/Update Salesforce Objects or execute a SOQL query (Salesforce).
- **Snowflake**: Snowflake can be accessed with the Snowflake JDBC endpoint dataset and a JDBC driver.
- **Spark**: Apply a Spark function to a specified field using Execute Spark function.
- **SQLite**: SQLite can be accessed with the JDBC endpoint dataset and a JDBC driver.
- **SSH**: Interact with SSH servers to Download SSH files or Execute commands via SSH.
- **Trino**: Trino can be accessed with the JDBC endpoint dataset and a JDBC driver.
- **XML**: Read and write XML files with the XML dataset, as well as Parse XML from external services.
- **YAML**: Load and integrate data from YAML files with the Parse YAML task.
- **Zipped JSON**: Use the JSON dataset to read and write JSON files in a ZIP archive.
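Several of the databases above (MariaDB, Microsoft SQL Server, MySQL, PostgreSQL, Snowflake, SQLite, and Trino) are reached through the JDBC endpoint dataset, which is configured with a JDBC connection URL. The following sketch shows the typical URL scheme for each driver; the host names, ports, database names, and paths are placeholders, not values from this documentation, and must be replaced with your own:

```
# Illustrative JDBC connection URLs (placeholders, adjust to your environment)
jdbc:mariadb://db.example.org:3306/mydb
jdbc:sqlserver://db.example.org:1433;databaseName=mydb
jdbc:mysql://db.example.org:3306/mydb
jdbc:postgresql://db.example.org:5432/mydb
jdbc:snowflake://myaccount.snowflakecomputing.com/?db=mydb
jdbc:sqlite:/data/mydb.sqlite
jdbc:trino://db.example.org:8080/mycatalog/myschema
```

Credentials are usually supplied separately in the dataset configuration rather than embedded in the URL; consult the respective driver documentation for additional connection parameters.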