All Configuration Options
Below is a table of every configurable option in the Semi-Structured Transformer. These options can be set when configuring your stack, or changed on an already running Transformer.
To change a value at runtime, call the endpoint `/updateConfig?configEntry=<entry>&configValue=<value>`, where `<entry>` is the config item's Entry name as listed below and `<value>` is the new value you wish to set. Any configuration changed while the Transformer is running can also be backed up and restored.
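For example, assuming a Transformer reachable on localhost port 8080 (the host and port will depend on your deployment), the following call would lower the CSV row limit at runtime:

```bash
# Hypothetical host and port; substitute the address of your running Transformer.
curl "http://localhost:8080/updateConfig?configEntry=maxCsvRows&configValue=50000"
```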
Transformer Configuration
| Environment Variable | Entry | Default Value | Description |
| --- | --- | --- | --- |
| FRIENDLY_NAME | friendlyName | Semi-Structured Transformer | The name you wish to give your Transformer. |
| TRANSFORMER_LICENSE | transformerLicense |  | The license key required to run the Transformer. Only required when running a non AWS Marketplace version of the Transformer. |
| TRANSFORMER_DIRECTORY | transformerDirectory | file:///var/local/ | The directory where all Transformer files are stored (assuming the individual file directory configs have not been edited). If this has been declared, the Transformer will create folders at the specified location on startup for mapping, output, yaml-mapping, prov output, and config backup. |
| MAPPINGS_DIR_URL | mappingsDirUrl | file:///var/local/mapping/ | The URL of the directory containing the mapping file(s). Can be local or remote; see here for more details. |
| OUTPUT_DIR_URL | outputDirUrl | file:///var/local/output/ | The URL of the directory the generated RDF will be output to. Can be local or remote. |
| OUTPUT_FILE_FORMAT | outputFileFormat | nquads | The file format used when the RDF is created. The options are: nquads, ntriples, jsonld, turtle, trig, and trix. |
| CONFIG_BACKUP | configBackup | file:///var/local/config-backup/ | The URL of the directory the config will be backed up to when calling the upload config endpoint. |
| MAX_CSV_ROWS | maxCsvRows | 100000 | The maximum number of rows a CSV file can contain before it is split into smaller files (of the specified length) and processed individually. |
| VALIDATE_CSV | validateCsv | true | By default, the Transformer checks that a CSV file is valid and removes any invalid or multiline rows. To turn this off, set this property to false. |
| TRANSFORMER_RUN_STANDALONE | runStandalone | true | Each Transformer is designed to run as part of a larger end-to-end system whose end result is graph data uploaded to a Knowledge or Property Graph, with Kafka used to communicate between the services. Standalone mode is enabled by default; to run the Transformer as part of that larger system, communicating over Kafka, set this property to false. |
| CUSTOM_FUNCTION_JAR_URL | customFunctionJarUrl |  | If no built-in function performs the operation you require, you can create and use your own. Set this variable to the URL of the jar file containing your functions (S3 is also supported) and follow the instructions laid out in this guide. |
| CUSTOM_FUNCTION_TTL_URL | customFunctionTtlUrl |  | If no built-in function performs the operation you require, you can create and use your own. Set this variable to the URL of the ttl file containing the mappings to your functions (S3 is also supported) and follow the instructions laid out in this guide. |
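As a minimal sketch of setting these options when starting the container (the image name `semi-structured-transformer:latest` is a placeholder; only the variable names come from the table above):

```bash
# Placeholder image name; example values are illustrative.
docker run \
  -e FRIENDLY_NAME="My Transformer" \
  -e OUTPUT_FILE_FORMAT="turtle" \
  -e MAX_CSV_ROWS="250000" \
  semi-structured-transformer:latest
```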
AWS Configuration
One way to connect a locally running Transformer to AWS and S3 is to set the region, access key, and secret key as environment variables. These must be set when running your Docker container; alternative methods can be found in the AWS documentation.
| Environment Variable | Description |
| --- | --- |
| AWS_REGION | The AWS region where your S3 buckets and files reside, for example "us-east-1". |
| AWS_ACCESS_KEY_ID | Your access key for AWS. |
| AWS_SECRET_ACCESS_KEY | Your secret key for AWS. |
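For example, with a placeholder image name and dummy credentials, the variables can be passed when starting the container:

```bash
# Placeholder image name and credentials; substitute your own values.
docker run \
  -e AWS_REGION="us-east-1" \
  -e AWS_ACCESS_KEY_ID="<your-access-key>" \
  -e AWS_SECRET_ACCESS_KEY="<your-secret-key>" \
  semi-structured-transformer:latest
```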
Property Graph Configuration
| Environment Variable | Entry | Default Value | Description |
| --- | --- | --- | --- |
| PROPERTY_GRAPH_MODE | propertyGraphMode | false | If you are using a Property Graph as your target graph type, set this to true; otherwise leave it as false. When set to true, the Transformer will output Nodes and Edges CSV files instead of RDF files. |
| PG_GRAPH | pgGraph | default | The Property Graph provider you wish to use: neptune-gremlin, neptune-cypher, tigergraph, or neo4j. By default, generic nodes and edges files are created, but the providers just listed require specific file shapes. Select default if you wish to use the Graph Writer to write to your property graph. |
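A sketch of switching the output to Neo4j-shaped Property Graph files (placeholder image name as before):

```bash
# Sketch: enable Property Graph output shaped for Neo4j.
docker run \
  -e PROPERTY_GRAPH_MODE="true" \
  -e PG_GRAPH="neo4j" \
  semi-structured-transformer:latest
```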
Kafka Configuration
| Environment Variable | Entry | Default Value | Description |
| --- | --- | --- | --- |
| KAFKA_BROKERS | kafkaBrokers | localhost:9092 | Tells the Transformer where to find your Kafka cluster. Set with the structure `<kafka-ip>:<kafka-port>`. The recommended port is 9092. |
| KAFKA_TOPIC_NAME_SOURCE | topicNameSource | source_urls | The topic the Consumer reads from; its messages contain the input file URLs used to ingest data. |
| KAFKA_TOPIC_NAME_DLQ | topicNameDLQ | dead_letter_queue | The topic messages are pushed to containing the reasons for failure within the Transformer. These messages are represented as JSON. |
| KAFKA_TOPIC_NAME_SUCCESS | topicNameSuccess | success_queue | The topic used for messages containing the file URLs of the successfully transformed graph data files. |
| KAFKA_GROUP_ID_CONFIG | groupIdConfig | consumerGroup1 | The identifier of the group this consumer belongs to. |
| KAFKA_AUTO_OFFSET_RESET_CONFIG | autoOffsetResetConfig | earliest | What to do when there is no initial offset in Kafka or an offset is out of range. earliest: automatically reset the offset to the earliest offset; latest: automatically reset the offset to the latest offset. |
| KAFKA_MAX_POLL_RECORDS | maxPollRecords | 100 | The maximum number of records returned in a single call to poll. |
| KAFKA_TIMEOUT | timeout | 1000 | Kafka consumer polling timeout. |
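A sketch of a non-standalone setup pointing at an external Kafka cluster (the broker address and image name are placeholders):

```bash
# Sketch: disable standalone mode so the Transformer communicates over Kafka.
docker run \
  -e TRANSFORMER_RUN_STANDALONE="false" \
  -e KAFKA_BROKERS="10.0.0.5:9092" \
  -e KAFKA_TOPIC_NAME_SOURCE="source_urls" \
  -e KAFKA_TOPIC_NAME_SUCCESS="success_queue" \
  -e KAFKA_TOPIC_NAME_DLQ="dead_letter_queue" \
  semi-structured-transformer:latest
```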
Provenance Configuration
| Environment Variable | Entry | Default Value | Description |
| --- | --- | --- | --- |
| RECORD_PROVO | recordProvo | true | Indicates whether any provenance metadata should be generated. |
| PROV_OUTPUT_DIR_URL | provOutputDirUrl | file:///var/local/prov-output/ | The URL of the directory for the provenance metadata. |
| PROV_KAFKA_BROKERS | provKafkaBrokers | localhost:9092 | The location of your Kafka cluster for provenance. This can be the same as, or different from, the Transformer's broker. |
| PROV_KAFKA_TOPIC_NAME_DLQ | provTopicNameDLQ | prov_dead_letter_queue | The topic used for your dead letter queue provenance messages. This can be the same as, or different from, the Transformer's DLQ topic. |
| PROV_KAFKA_TOPIC_NAME_SUCCESS | provTopicNameSuccess | prov_success_queue | The topic used for messages containing the file URLs of the successfully generated provenance files. This can be the same as, or different from, the Transformer's success queue topic. |
| SWITCHED_OFF_ACTIVITIES | switchedOffActivities |  | A comma-separated list of the provenance processes you wish to turn off. The Transformer contains the following processes: main-execution, kafkaActivity, and transformer-iteration. |
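For example, to keep provenance on but record only the main-execution activity, switching off the other two processes named above (placeholder image name):

```bash
# Sketch: generate provenance, but switch off two of the three activities.
docker run \
  -e RECORD_PROVO="true" \
  -e SWITCHED_OFF_ACTIVITIES="kafkaActivity,transformer-iteration" \
  semi-structured-transformer:latest
```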
Logging Configuration
| Environment Variable | Default Value | Description |
| --- | --- | --- |
| LOG_LEVEL_TRANSFORMER | INFO | Log level for the Transformer loggers. Change to DEBUG to see more in-depth logs, or to WARN or ERROR to quiet the logging. |
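For example, to quiet the Transformer loggers for a production run (placeholder image name):

```bash
# Placeholder image name; only warnings and errors will be logged.
docker run -e LOG_LEVEL_TRANSFORMER="WARN" semi-structured-transformer:latest
```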
Additional Logging Configuration
| Environment Variable | Default Value | Description |
| --- | --- | --- |
| LOGGING_LEVEL | WARN | Global log level. |
| LOGGING_APPENDERS_CONSOLE_TIMEZONE | UTC | Timezone for console logging. |
| LOGGING_APPENDERS_TXT_FILE_THRESHOLD | ALL | Threshold for text file logging. |
| Log Format (not overridable) | %-6level [%d{HH:mm:ss.SSS}] [%t] %logger{5} - %X{code} %msg %n | Pattern for logging messages. |
| Current Log Filename (not overridable) | /var/log/graphbuild/text/current/application_${applicationName}_${timeStamp}.txt.log | Pattern for the log file name. |
| LOGGING_APPENDERS_TXT_FILE_ARCHIVE | true | Archive text log files. |
| Archived Log Filename Pattern (not overridable) | /var/log/graphbuild/text/archive/application_${applicationName}_${timeStamp}_to_%d{yyyy-MM-dd}.txt.log | Log file rollover frequency depends on the date pattern in this property; for example, %d{yyyy-MM-ww} declares weekly rollover. |
| LOGGING_APPENDERS_TXT_FILE_ARCHIVED_TXT_FILE_COUNT | 7 | Maximum number of archived text files. |
| LOGGING_APPENDERS_TXT_FILE_TIMEZONE | UTC | Timezone for text file logging. |
| LOGGING_APPENDERS_JSON_FILE_THRESHOLD | ALL | Threshold for JSON file logging. |
| Log Format (not overridable) | %-6level [%d{HH:mm:ss.SSS}] [%t] %logger{5} - %X{code} %msg %n | Pattern for logging messages. |
| Current Log Filename (not overridable) | /var/log/graphbuild/json/current/application_${applicationName}_${timeStamp}.json.log | Pattern for the log file name. |
| LOGGING_APPENDERS_JSON_FILE_ARCHIVE | true | Archive JSON log files. |
| Archived Log Filename Pattern (not overridable) | /var/log/graphbuild/json/archive/application_${applicationName}_${timeStamp}_to_%d{yyyy-MM-dd}.json.log | Log file rollover frequency depends on the date pattern in this property; for example, %d{yyyy-MM-ww} declares weekly rollover. |
| LOGGING_APPENDERS_JSON_FILE_ARCHIVED_FILE_COUNT | 7 | Maximum number of archived JSON files. |
| LOGGING_APPENDERS_JSON_FILE_TIMEZONE | UTC | Timezone for JSON file logging. |
| LOGGING_APPENDERS_JSON_FILE_LAYOUT_TYPE | json | The layout type for the JSON logger. |