HugeGraph Config Options

Gremlin Server Config Options

Corresponding configuration file: gremlin-server.yaml

| config option | default value | description |
|---|---|---|
| host | 127.0.0.1 | The host or IP of Gremlin Server. |
| port | 8182 | The listening port of Gremlin Server. |
| graphs | hugegraph: conf/hugegraph.properties | The map of graphs with name and config file path. |
| scriptEvaluationTimeout | 30000 | The timeout for gremlin script execution (milliseconds). |
| channelizer | org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer | Indicates the protocol with which the Gremlin Server provides service. |
| authentication | authenticator: org.apache.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties} | The authenticator and config (contains tokens path) of the authentication mechanism. |
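
Putting these together, here is a minimal gremlin-server.yaml sketch using only the options documented above. It assumes the standard TinkerPop YAML layout; a real file needs additional entries (e.g. serializers) that are omitted here.

```yaml
# Minimal gremlin-server.yaml sketch -- illustrative, not a complete file
host: 127.0.0.1
port: 8182
scriptEvaluationTimeout: 30000
channelizer: org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer
graphs: {
  hugegraph: conf/hugegraph.properties
}
authentication: {
  authenticator: org.apache.hugegraph.auth.StandardAuthenticator,
  config: {tokens: conf/rest-server.properties}
}
```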

Rest Server & API Config Options

Corresponding configuration file: rest-server.properties

| config option | default value | description |
|---|---|---|
| graphs | [hugegraph:conf/hugegraph.properties] | The map of graphs' names and config files. |
| server.id | server-1 | The id of the rest server, used for license verification. |
| server.role | master | The role of nodes in the cluster, available types are [master, worker, computer]. |
| restserver.url | http://127.0.0.1:8080 | The url for listening of the rest server. |
| ssl.keystore_file | server.keystore | The path of the server keystore file used when the https protocol is enabled. |
| ssl.keystore_password |  | The password of the server keystore file used when the https protocol is enabled. |
| restserver.max_worker_threads | 2 * CPUs | The maximum worker threads of the rest server. |
| restserver.min_free_memory | 64 | The minimum free memory (MB) of the rest server; requests will be rejected when the available memory of the system is lower than this value. |
| restserver.request_timeout | 30 | The time in seconds within which a request must complete, -1 means no timeout. |
| restserver.connection_idle_timeout | 30 | The time in seconds to keep an inactive connection alive, -1 means no timeout. |
| restserver.connection_max_requests | 256 | The max number of HTTP requests allowed to be processed on one keep-alive connection, -1 means unlimited. |
| gremlinserver.url | http://127.0.0.1:8182 | The url of the gremlin server. |
| gremlinserver.max_route | 8 | The max route number for the gremlin server. |
| gremlinserver.timeout | 30 | The timeout in seconds of waiting for the gremlin server. |
| batch.max_edges_per_batch | 500 | The maximum number of edges submitted per batch. |
| batch.max_vertices_per_batch | 500 | The maximum number of vertices submitted per batch. |
| batch.max_write_ratio | 50 | The maximum thread ratio for batch writing, only takes effect if batch.max_write_threads is 0. |
| batch.max_write_threads | 0 | The maximum threads for batch writing; if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads. |
| auth.authenticator |  | The class path of the authenticator implementation, e.g. org.apache.hugegraph.auth.StandardAuthenticator or org.apache.hugegraph.auth.ConfigAuthenticator. |
| auth.admin_token | 162f7848-0b6d-4faf-b557-3a0797869c55 | Token for administrator operations, only for org.apache.hugegraph.auth.ConfigAuthenticator. |
| auth.graph_store | hugegraph | The name of the graph used to store authentication information, like users, only for org.apache.hugegraph.auth.StandardAuthenticator. |
| auth.user_tokens | [hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31] | The map of user tokens with name and password, only for org.apache.hugegraph.auth.ConfigAuthenticator. |
| auth.audit_log_rate | 1000.0 | The max rate of audit log output per user, default value is 1000 records per second. |
| auth.cache_capacity | 10240 | The max cache capacity of each auth cache item. |
| auth.cache_expire | 600 | The expiration time in seconds of each auth cache item. |
| auth.remote_url |  | If the address is empty, this node provides the auth service itself; otherwise it acts as an auth client and also provides the auth service through rpc forwarding. The remote url can be set to multiple addresses, which are concatenated by ','. |
| auth.token_expire | 86400 | The expiration time in seconds after a token is created. |
| auth.token_secret | FXQXbJtbCLxODc6tGci732pkH1cyf8Qg | Secret key of the HS256 algorithm. |
| exception.allow_trace | false | Whether to allow the exception trace stack to be returned. |
| memory_monitor.threshold | 0.85 | The threshold of JVM (in-heap) memory usage monitoring, 1 means disabling this function. |
| memory_monitor.period | 2000 | The period in ms of JVM (in-heap) memory usage monitoring. |
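
As a sketch, a rest-server.properties combining common options from the table might look like this; all values are illustrative (the tokens are the table's examples and must be replaced in production):

```properties
# rest-server.properties sketch -- illustrative values only
restserver.url=http://127.0.0.1:8080
gremlinserver.url=http://127.0.0.1:8182
graphs=[hugegraph:conf/hugegraph.properties]
server.id=server-1
server.role=master
# Optional token-based auth via ConfigAuthenticator
auth.authenticator=org.apache.hugegraph.auth.ConfigAuthenticator
auth.admin_token=162f7848-0b6d-4faf-b557-3a0797869c55
auth.user_tokens=[hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31]
```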

Basic Config Options

Basic Config Options and Backend Config Options correspond to the configuration file {graph-name}.properties, such as hugegraph.properties

| config option | default value | description |
|---|---|---|
| gremlin.graph | org.apache.hugegraph.HugeFactory | Gremlin entrance to create graph. |
| backend | rocksdb | The data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql]. |
| serializer | binary | The serializer for the backend store, available values are [text, binary, cassandra, hbase, mysql]. |
| store | hugegraph | The database name, like a Cassandra keyspace. |
| store.connection_detect_interval | 600 | The interval in seconds for detecting connections; if the idle time of a connection exceeds this value, detect it and reconnect if needed before using, value 0 means detecting every time. |
| store.graph | g | The graph table name, which stores vertices, edges and properties. |
| store.schema | m | The schema table name, which stores meta data. |
| store.system | s | The system table name, which stores system data. |
| schema.illegal_name_regex | .\s+$\|~. | The regex that specifies the illegal format of a schema name. |
| schema.cache_capacity | 10000 | The max cache size (items) of the schema cache. |
| vertex.cache_type | l2 | The type of vertex cache, allowed values are [l1, l2]. |
| vertex.cache_capacity | 10000000 | The max cache size (items) of the vertex cache. |
| vertex.cache_expire | 600 | The expiration time in seconds of the vertex cache. |
| vertex.check_customized_id_exist | false | Whether to check that vertices exist for those using the customized id strategy. |
| vertex.default_label | vertex | The default vertex label. |
| vertex.tx_capacity | 10000 | The max size (items) of uncommitted vertices in a transaction. |
| vertex.check_adjacent_vertex_exist | false | Whether to check that the adjacent vertices of edges exist. |
| vertex.lazy_load_adjacent_vertex | true | Whether to lazily load the adjacent vertices of edges. |
| vertex.part_edge_commit_size | 5000 | Whether to enable the mode of committing part of a vertex's edges; enabled if commit size > 0, 0 means disabled. |
| vertex.encode_primary_key_number | true | Whether to encode the number value of the primary key in the vertex id. |
| vertex.remove_left_index_at_overwrite | false | Whether to remove left indexes at overwrite. |
| edge.cache_type | l2 | The type of edge cache, allowed values are [l1, l2]. |
| edge.cache_capacity | 1000000 | The max cache size (items) of the edge cache. |
| edge.cache_expire | 600 | The expiration time in seconds of the edge cache. |
| edge.tx_capacity | 10000 | The max size (items) of uncommitted edges in a transaction. |
| query.page_size | 500 | The size of each page when querying by paging. |
| query.batch_size | 1000 | The size of each batch when querying by batch. |
| query.ignore_invalid_data | true | Whether to ignore invalid data of vertices or edges. |
| query.index_intersect_threshold | 1000 | The maximum number of intermediate results for intersecting indexes when querying by multiple single-index properties. |
| query.ramtable_edges_capacity | 20000000 | The maximum number of edges in the ramtable, including both OUT and IN edges. |
| query.ramtable_enable | false | Whether to enable the ramtable for queries of adjacent edges. |
| query.ramtable_vertices_capacity | 10000000 | The maximum number of vertices in the ramtable; generally the largest vertex id is used as the capacity. |
| query.optimize_aggregate_by_index | false | Whether to optimize aggregate queries (like count) by index. |
| oltp.concurrent_depth | 10 | The min depth to enable the concurrent oltp algorithm. |
| oltp.concurrent_threads | 10 | The number of threads to concurrently execute the oltp algorithm. |
| oltp.collection_type | EC | The implementation type of collections used in the oltp algorithm. |
| rate_limit.read | 0 | The max rate (times/s) of executing queries of vertices/edges. |
| rate_limit.write | 0 | The max rate (items/s) of adding/updating/deleting vertices/edges. |
| task.wait_timeout | 10 | Timeout in seconds for waiting for a task to complete, such as when truncating or clearing the backend. |
| task.input_size_limit | 16777216 | The job input size limit in bytes. |
| task.result_size_limit | 16777216 | The job result size limit in bytes. |
| task.sync_deletion | false | Whether to delete schema or expired data synchronously. |
| task.ttl_delete_batch | 1 | The batch size used to delete expired data. |
| computer.config | /conf/computer.yaml | The config file path of the computer job. |
| search.text_analyzer | ikanalyzer | Choose a text analyzer for searching vertex/edge properties, available types are [word, ansj, hanlp, smartcn, jieba, jcseg, mmseg4j, ikanalyzer]. If 'ikanalyzer' is used, the jar needs to be downloaded from https://github.com/apache/hugegraph-doc/raw/ik_binary/dist/server/ikanalyzer-2012_u6.jar into the lib directory. |
| search.text_analyzer_mode | smart | Specify the mode of the text analyzer; the available modes of each analyzer are {word: [MaximumMatching, ReverseMaximumMatching, MinimumMatching, ReverseMinimumMatching, BidirectionalMaximumMatching, BidirectionalMinimumMatching, BidirectionalMaximumMinimumMatching, FullSegmentation, MinimalWordCount, MaxNgramScore, PureEnglish], ansj: [BaseAnalysis, IndexAnalysis, ToAnalysis, NlpAnalysis], hanlp: [standard, nlp, index, nShort, shortest, speed], smartcn: [], jieba: [SEARCH, INDEX], jcseg: [Simple, Complex], mmseg4j: [Simple, Complex, MaxWord], ikanalyzer: [smart, max_word]}. |
| snowflake.datacenter_id | 0 | The datacenter id of the snowflake id generator. |
| snowflake.force_string | false | Whether to force the snowflake long id to be a string. |
| snowflake.worker_id | 0 | The worker id of the snowflake id generator. |
| raft.mode | false | Whether the backend storage works in raft mode. |
| raft.safe_read | false | Whether to use linearizable reads. |
| raft.use_snapshot | false | Whether to use snapshots. |
| raft.endpoint | 127.0.0.1:8281 | The peer id of the current raft node. |
| raft.group_peers | 127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283 | The peers of the current raft group. |
| raft.path | ./raft-log | The log path of the current raft node. |
| raft.use_replicator_pipeline | true | Whether to use the replicator pipeline; when turned on, multiple logs can be sent in parallel, and the next log does not have to wait for the ack message of the current log before being sent. |
| raft.election_timeout | 10000 | Timeout in milliseconds to launch a round of election. |
| raft.snapshot_interval | 3600 | The interval in seconds to trigger a snapshot save. |
| raft.backend_threads | current CPU v-cores | The number of threads used to apply tasks to the backend. |
| raft.read_index_threads | 8 | The number of threads used to execute reading index. |
| raft.apply_batch | 1 | The apply batch size to trigger the disruptor event handler. |
| raft.queue_size | 16384 | The disruptor buffer size for the jraft RaftNode, StateMachine and LogManager. |
| raft.queue_publish_timeout | 60 | The timeout in seconds when publishing events into the disruptor. |
| raft.rpc_threads | 80 | The number of rpc threads for the jraft RPC layer. |
| raft.rpc_connect_timeout | 5000 | The rpc connect timeout for jraft rpc. |
| raft.rpc_timeout | 60000 | The rpc timeout for jraft rpc. |
| raft.rpc_buf_low_water_mark | 10485760 | The ChannelOutboundBuffer's low water mark of netty; when the buffer size is below this size, ChannelOutboundBuffer.isWritable() returns true, which means low downstream pressure or a good network. |
| raft.rpc_buf_high_water_mark | 20971520 | The ChannelOutboundBuffer's high water mark of netty; only when the buffer size exceeds this size does ChannelOutboundBuffer.isWritable() return false, which means the downstream pressure is too great to process the requests or the network is very congested, and the upstream needs to limit its rate at this time. |
| raft.read_strategy | ReadOnlyLeaseBased | The linearizable read strategy. |
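
For orientation, a minimal {graph-name}.properties sketch using only the basic options above might be as follows; backend-specific options are covered in the sections below:

```properties
# hugegraph.properties sketch for a RocksDB-backed graph -- illustrative values
gremlin.graph=org.apache.hugegraph.HugeFactory
backend=rocksdb
serializer=binary
store=hugegraph
# Cache and transaction sizing, shown with the defaults from the table above
vertex.cache_capacity=10000000
edge.cache_capacity=1000000
vertex.tx_capacity=10000
edge.tx_capacity=10000
```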

RPC server Config Options

| config option | default value | description |
|---|---|---|
| rpc.client_connect_timeout | 20 | The timeout (in seconds) of the rpc client connecting to the rpc server. |
| rpc.client_load_balancer | consistentHash | The rpc client uses a load-balancing algorithm to access multiple rpc servers in one cluster; the default value 'consistentHash' means forwarding by request parameters. |
| rpc.client_read_timeout | 40 | The timeout (in seconds) of the rpc client reading from the rpc server. |
| rpc.client_reconnect_period | 10 | The period (in seconds) of the rpc client reconnecting to the rpc server. |
| rpc.client_retries | 3 | The number of failed retries of rpc client calls to the rpc server. |
| rpc.config_order | 999 | The loading order of the sofa rpc configuration file; the larger the value, the later it is loaded. |
| rpc.logger_impl | com.alipay.sofa.rpc.log.SLF4JLoggerImpl | Sofa rpc log implementation class. |
| rpc.protocol | bolt | Rpc communication protocol; client and server need to specify the same value. |
| rpc.remote_url |  | The remote urls of rpc peers; it can be set to multiple addresses, which are concatenated by ',', an empty value means not enabled. |
| rpc.server_adaptive_port | false | Whether the bound port is adaptive; if enabled, when the port is in use, it automatically increments by 1 to probe the next available port. Note that this process is not atomic, so there may still be port conflicts. |
| rpc.server_host |  | The hosts/ips bound by the rpc server to provide services, an empty value means not enabled. |
| rpc.server_port | 8090 | The port bound by the rpc server to provide services. |
| rpc.server_timeout | 30 | The timeout (in seconds) of rpc server execution. |
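
For example, a node exposing an rpc endpoint and pointing at two peers might set the following; the hosts and ports are placeholders:

```properties
# RPC sketch -- host/port values are placeholders
rpc.server_host=192.168.1.10
rpc.server_port=8090
rpc.server_timeout=30
# Peers are concatenated by ','; an empty value means rpc is not enabled
rpc.remote_url=192.168.1.10:8090,192.168.1.11:8090
```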

Cassandra Backend Config Options

| config option | default value | description |
|---|---|---|
| backend |  | Must be set to cassandra. |
| serializer |  | Must be set to cassandra. |
| cassandra.host | localhost | The seed hostnames or ip addresses of the cassandra cluster. |
| cassandra.port | 9042 | The seed port of the cassandra cluster. |
| cassandra.connect_timeout | 5 | The cassandra driver connect-to-server timeout (seconds). |
| cassandra.read_timeout | 20 | The cassandra driver read-from-server timeout (seconds). |
| cassandra.keyspace.strategy | SimpleStrategy | The replication strategy of the keyspace, valid values are SimpleStrategy or NetworkTopologyStrategy. |
| cassandra.keyspace.replication | [3] | The keyspace replication factor of SimpleStrategy, like '[3]', or the replicas in each datacenter of NetworkTopologyStrategy, like '[dc1:2,dc2:1]'. |
| cassandra.username |  | The username used to log in to the cassandra cluster. |
| cassandra.password |  | The password corresponding to cassandra.username. |
| cassandra.compression_type | none | The compression algorithm of cassandra transport: none/snappy/lz4. |
| cassandra.jmx_port | 7199 | The port of the JMX API service for cassandra. |
| cassandra.aggregation_timeout | 43200 | The timeout in seconds of waiting for aggregation. |
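
A Cassandra backend section of {graph-name}.properties might look like the sketch below; the seed hosts, credentials and datacenter names are placeholders:

```properties
# Cassandra backend sketch -- hosts, credentials and DC names are placeholders
backend=cassandra
serializer=cassandra
store=hugegraph
cassandra.host=cassandra-seed-1,cassandra-seed-2
cassandra.port=9042
cassandra.keyspace.strategy=NetworkTopologyStrategy
cassandra.keyspace.replication=[dc1:2,dc2:1]
cassandra.username=hugegraph
cassandra.password=secret
```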

ScyllaDB Backend Config Options

| config option | default value | description |
|---|---|---|
| backend |  | Must be set to scylladb. |
| serializer |  | Must be set to scylladb. |

Other options are consistent with the Cassandra backend.

RocksDB Backend Config Options

| config option | default value | description |
|---|---|---|
| backend |  | Must be set to rocksdb. |
| serializer |  | Must be set to binary. |
| rocksdb.data_disks | [] | The optimized disks for storing RocksDB data. The format of each element is STORE/TABLE: /path/disk. Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search_index, g/shard_index, g/unique_index, g/olap]. |
| rocksdb.data_path | rocksdb-data/data | The path for storing RocksDB data. |
| rocksdb.wal_path | rocksdb-data/wal | The path for storing the WAL of RocksDB. |
| rocksdb.allow_mmap_reads | false | Allow the OS to mmap files for reading sst tables. |
| rocksdb.allow_mmap_writes | false | Allow the OS to mmap files for writing. |
| rocksdb.block_cache_capacity | 8388608 | The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache. |
| rocksdb.bloom_filter_bits_per_key | -1 | The bits per key in the bloom filter; a good value is 10, which yields a filter with ~1% false positive rate, -1 means no bloom filter. |
| rocksdb.bloom_filter_block_based_mode | false | Use a block-based filter rather than a full filter. |
| rocksdb.bloom_filter_whole_key_filtering | true | True to place whole keys in the bloom filter, otherwise place the prefixes of keys. |
| rocksdb.bottommost_compression | NO_COMPRESSION | The compression algorithm for the bottommost level of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd. |
| rocksdb.bulkload_mode | false | Switch to the mode that bulk-loads data into RocksDB. |
| rocksdb.cache_index_and_filter_blocks | false | Indicates whether to put index/filter blocks into the block cache. |
| rocksdb.compaction_style | LEVEL | Set the compaction style for RocksDB: LEVEL/UNIVERSAL/FIFO. |
| rocksdb.compression | SNAPPY_COMPRESSION | The compression algorithm for compressing blocks of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd. |
| rocksdb.compression_per_level | [NO_COMPRESSION, NO_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION] | The compression algorithms for different levels of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd. |
| rocksdb.delayed_write_rate | 16777216 | The rate limit in bytes/s of user write requests when writes need to slow down because compaction gets behind. |
| rocksdb.log_level | INFO | The info log level of RocksDB. |
| rocksdb.max_background_jobs | 8 | Maximum number of concurrent background jobs, including flushes and compactions. |
| rocksdb.level_compaction_dynamic_level_bytes | false | Whether to enable level_compaction_dynamic_level_bytes; if enabled, max_bytes_for_level_multiplier takes priority over max_bytes_for_level_base and the bytes of the base level become dynamic, yielding a more predictable LSM tree, which is useful to limit worst-case space amplification. Turning this feature on/off for an existing DB can cause an unexpected LSM tree structure, so it is not recommended. |
| rocksdb.max_bytes_for_level_base | 536870912 | The upper bound of the total size of level-1 files in bytes. |
| rocksdb.max_bytes_for_level_multiplier | 10.0 | The ratio between the total size of level (L+1) files and the total size of level L files for all L. |
| rocksdb.max_open_files | -1 | The maximum number of open files that can be cached by RocksDB, -1 means no limit. |
| rocksdb.max_subcompactions | 4 | The maximum number of threads per compaction job. |
| rocksdb.max_write_buffer_number | 6 | The maximum number of write buffers that are built up in memory. |
| rocksdb.max_write_buffer_number_to_maintain | 0 | The total maximum number of write buffers to maintain in memory. |
| rocksdb.min_write_buffer_number_to_merge | 2 | The minimum number of write buffers that will be merged together. |
| rocksdb.num_levels | 7 | Set the number of levels for this database. |
| rocksdb.optimize_filters_for_hits | false | This flag allows us to not store filters for the last level. |
| rocksdb.optimize_mode | true | Optimize for heavy workloads and big datasets. |
| rocksdb.pin_l0_filter_and_index_blocks_in_cache | false | Indicates whether to pin level-0 index/filter blocks in the block cache. |
| rocksdb.sst_path |  | The path for ingesting SST files into RocksDB. |
| rocksdb.target_file_size_base | 67108864 | The target file size for compaction in bytes. |
| rocksdb.target_file_size_multiplier | 1 | The size ratio between a level L file and a level (L+1) file. |
| rocksdb.use_direct_io_for_flush_and_compaction | false | Enable the OS to use direct reads/writes in flush and compaction. |
| rocksdb.use_direct_reads | false | Enable the OS to use direct I/O for reading sst tables. |
| rocksdb.write_buffer_size | 134217728 | Amount of data in bytes to build up in memory. |
| rocksdb.max_manifest_file_size | 104857600 | The max size of the manifest file in bytes. |
| rocksdb.skip_stats_update_on_db_open | false | Whether to skip statistics updates when opening the database; setting this flag to true allows us to not update statistics. |
| rocksdb.max_file_opening_threads | 16 | The max number of threads used to open files. |
| rocksdb.max_total_wal_size | 0 | Total size of WAL files in bytes. Once WALs exceed this size, we will start forcing the flush of the related column families, 0 means no limit. |
| rocksdb.db_write_buffer_size | 0 | Total size of write buffers in bytes across all column families, 0 means no limit. |
| rocksdb.delete_obsolete_files_period | 21600 | The periodicity in seconds at which obsolete files get deleted, 0 means always do a full purge. |
| rocksdb.hard_pending_compaction_bytes_limit | 274877906944 | The hard limit to impose on pending compaction in bytes. |
| rocksdb.level0_file_num_compaction_trigger | 2 | Number of files to trigger level-0 compaction. |
| rocksdb.level0_slowdown_writes_trigger | 20 | Soft limit on the number of level-0 files for slowing down writes. |
| rocksdb.level0_stop_writes_trigger | 36 | Hard limit on the number of level-0 files for stopping writes. |
| rocksdb.soft_pending_compaction_bytes_limit | 68719476736 | The soft limit to impose on pending compaction in bytes. |
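
As a sketch of commonly adjusted RocksDB options, including the per-table disk mapping of rocksdb.data_disks described above; the paths are placeholders:

```properties
# RocksDB backend sketch -- paths are placeholders
backend=rocksdb
serializer=binary
rocksdb.data_path=rocksdb-data/data
rocksdb.wal_path=rocksdb-data/wal
# Optionally spread hot tables across disks, format STORE/TABLE: /path/disk
rocksdb.data_disks=[g/vertex: /ssd1/hugegraph, g/edge_out: /ssd2/hugegraph]
# Enable a bloom filter (10 bits/key yields ~1% false positives)
rocksdb.bloom_filter_bits_per_key=10
```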

HBase Backend Config Options

| config option | default value | description |
|---|---|---|
| backend |  | Must be set to hbase. |
| serializer |  | Must be set to hbase. |
| hbase.hosts | localhost | The hostnames or ip addresses of HBase zookeeper, separated by commas. |
| hbase.port | 2181 | The port of HBase zookeeper. |
| hbase.threads_max | 64 | The max number of threads of hbase connections. |
| hbase.znode_parent | /hbase | The znode parent path of HBase zookeeper. |
| hbase.zk_retry | 3 | The recovery retry times of HBase zookeeper. |
| hbase.aggregation_timeout | 43200 | The timeout in seconds of waiting for aggregation. |
| hbase.kerberos_enable | false | Whether Kerberos authentication is enabled for HBase. |
| hbase.kerberos_keytab |  | The keytab file of HBase for kerberos authentication. |
| hbase.kerberos_principal |  | The principal of HBase for kerberos authentication. |
| hbase.krb5_conf | etc/krb5.conf | Kerberos configuration file, including KDC IP, default realm, etc. |
| hbase.hbase_site | /etc/hbase/conf/hbase-site.xml | The configuration file of HBase. |
| hbase.enable_partition | true | Whether pre-split partitions are enabled for HBase. |
| hbase.vertex_partitions | 10 | The number of partitions of the HBase vertex table. |
| hbase.edge_partitions | 30 | The number of partitions of the HBase edge table. |
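
An HBase backend sketch follows; the zookeeper quorum hosts are placeholders:

```properties
# HBase backend sketch -- zookeeper hosts are placeholders
backend=hbase
serializer=hbase
hbase.hosts=zk1,zk2,zk3
hbase.port=2181
hbase.znode_parent=/hbase
# Pre-split partitions spread initial writes across region servers
hbase.enable_partition=true
hbase.vertex_partitions=10
hbase.edge_partitions=30
```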

MySQL & PostgreSQL Backend Config Options

| config option | default value | description |
|---|---|---|
| backend |  | Must be set to mysql. |
| serializer |  | Must be set to mysql. |
| jdbc.driver | com.mysql.jdbc.Driver | The JDBC driver class used to connect to the database. |
| jdbc.url | jdbc:mysql://127.0.0.1:3306 | The url of the database in JDBC format. |
| jdbc.username | root | The username used to log in to the database. |
| jdbc.password | ****** | The password corresponding to jdbc.username. |
| jdbc.ssl_mode | false | The SSL mode of connections with the database. |
| jdbc.reconnect_interval | 3 | The interval (seconds) between reconnections when the database connection fails. |
| jdbc.reconnect_max_times | 3 | The number of reconnect attempts when the database connection fails. |
| jdbc.storage_engine | InnoDB | The storage engine of the backend store database, like InnoDB/MyISAM/RocksDB for MySQL. |
| jdbc.postgresql.connect_database | template1 | The database used to connect to when initializing a store, dropping a store or checking whether a store exists. |
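
A MySQL backend sketch using the defaults above; the credentials are placeholders:

```properties
# MySQL backend sketch -- credentials are placeholders
backend=mysql
serializer=mysql
jdbc.driver=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://127.0.0.1:3306
jdbc.username=root
jdbc.password=******
jdbc.reconnect_interval=3
jdbc.reconnect_max_times=3
```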

PostgreSQL Backend Config Options

| config option | default value | description |
|---|---|---|
| backend |  | Must be set to postgresql. |
| serializer |  | Must be set to postgresql. |

Other options are consistent with the MySQL backend.

The driver and url of the PostgreSQL backend should be set to:

  • jdbc.driver=org.postgresql.Driver
  • jdbc.url=jdbc:postgresql://localhost:5432/
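
Combined, a PostgreSQL backend sketch looks like:

```properties
# PostgreSQL backend sketch -- other options follow the MySQL backend
backend=postgresql
serializer=postgresql
jdbc.driver=org.postgresql.Driver
jdbc.url=jdbc:postgresql://localhost:5432/
jdbc.postgresql.connect_database=template1
```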