About This User Guide
The following list describes the main sections in this document.
Provides information about the components of Actian DataConnect Studio and its user interface, and helps you get started.
Provides guidance and best practices for using the Actian DataConnect Studio IDE design tools effectively and creating efficient integration templates and solutions.
Provides information about importing projects and artifacts from prior versions, including how to use the import wizard and its options.
Provides information about the customizations and settings available in the Preferences dialog box, which can be accessed from Options > Preferences.
Provides information about managing workspaces, projects, and integration artifacts.
Provides information about creating and managing macro sets and macros.
Describes the Process Editor, which helps you define integration steps as a flowchart so that you can run the entire workflow sequence as a single unit. Process designs can range in complexity from simple integration flows to parallel processing of large data loads or pushing messages back and forth across message queues.
Provides information about health care integration, which supports the exchange of electronic data among diverse and independent medical applications. Unlike tightly coupled client-server applications, messages are received and delivered through TCP/IP or FTP.
Provides information about using the Data Profiler engine to run, from the command line, profiles created in the Data Profile Editor. A set of command line options is available to override various properties, macro values, and so on.
Provides information about executing Data Profiling rules (specific to an individual use case or project) that are created in the Data Profile Editor. This helps users quickly determine the quality levels of the source data, identify the types of problems, and reduce issues caused by propagating bad data to downstream systems and applications.
Describes how to use the Map Editor to design and execute a data transformation map. Data transformation maps can be designed and saved for future use. Map files can be executed interactively in the Map Editor environment, which is suitable for ad hoc testing and prototyping. They can also be executed using the command line interface and can be packaged and scheduled using the Integration Manager or through a custom application.
Provides information about specialized components called connectors that are used by data transformation maps. Connectors facilitate data transformation by reading data from a source (source connectors), converting it to a specific format, and then writing the data into a target (target connectors). These connectors are plug-and-play, flexible, and highly efficient. They easily transform bulk quantities of data in real time and facilitate migration.
Provides information about setting encoding in connector properties, EZscript functions, and objects.
Provides information about the Schema Editor, which you can use to create or modify the data structure and record types for sources, lookups, temporary targets, and targets in a map. The resulting XML-based file includes the schema, record recognition rules, and record validation rules, and is stored with the .schema extension.
Provides information about EZscript, a scripting language that allows you to write your own scripts and expressions to include in processes and transformations. A script contains a code snippet that the scripting engine evaluates directly to produce the required result.
Provides information about the Extract Editor, which allows you to mark up unstructured data, extract the required data fields from various lines in the file, and assemble the fields into a flat data record.
Describes the Content Extraction Language (CXL), an AWK-like, line-oriented programming language. Its purpose is to recognize and extract structured fields of data from specific lines of incoming text files and assemble those fields into a flat record of data, which it passes on to a subsequent process, for example, a Map Designer.
Describes the Engine Profiler that provides graphical views that display the amount of processor time (in milliseconds) used by each event or function call in the transformation or process designs.
Provides information about the package manager which is used to create and deploy packages from existing projects. A package is an archive file that contains all the artifacts that are required to run a map or process.
Provides information about the Actian DataConnect Runtime Engine which is an embedded, high-performance, cross-platform engine that executes the integration artifacts created using the Actian DataConnect Studio IDE or Studio SDK on Windows or Linux servers. The same portable integration package (.djar file) can be run on any of these platforms with no code changes.
Describes how to use the Actian DataConnect Design Templates that allow new users to quickly get started by importing a DataConnect project that contains pre-built, runnable integration process designs.
Provides information about error codes and messages for the data integration platform.
Provides troubleshooting tips and workarounds for known problems.
Provides a list of Frequently Asked Questions (FAQs).
Last modified date: 02/09/2024