Introduction
RaimaDB, an embedded database system from Raima Inc., is designed for high-performance applications across diverse platforms, including desktops, servers, embedded systems, and microcontrollers. It handles real-time data management in scenarios such as IoT devices, industrial automation, and edge computing. RaimaDB supports replication and hot backup mechanisms, which provide data redundancy, disaster recovery, and high availability without interrupting operations.
Replication in RaimaDB propagates data changes from a source database to one or more target databases, maintaining consistency across distributed systems. Hot backup creates live copies of the database while it remains operational, reducing downtime during maintenance or failover events. These features are based on RaimaDB’s storage architecture, which focuses on efficiency, cross-platform compatibility, and minimal I/O overhead.
This article covers RaimaDB’s asynchronous replication, the technologies enabling these features, and implementation approaches. It demonstrates setups using Docker containers, and details replication strategies using tools like rdm-replicate, rdm-hot-backup, and rdm-rsync. For further reading, refer to Raima’s documentation on updates by copy-on-write, snapshots, vacuuming, and replication and hot backup.
Understanding Asynchronous Replication in RaimaDB
RaimaDB supports asynchronous replication, where changes to the source database are sent to target replicas without requiring immediate synchronization. The source applies changes to its pack files, and replication or hot backup transfers the pack file extensions to the target. This approach improves performance by not blocking on remote acknowledgments, making it suitable for high-latency or intermittently connected environments.
Asynchronous replication prioritizes availability and throughput over strict real-time consistency between databases. Targets may lag behind the source, leading to eventual consistency, but RaimaDB’s transactional integrity ensures updates are applied atomically to avoid partial states. Benefits include lower latency for source operations, fault tolerance (source continues if targets are unavailable), and scalability for read-heavy workloads where targets offload queries.
Users must handle potential conflicts, such as divergent updates on targets, through application logic. Asynchronous mode fits backup, reporting, or geographically distributed systems, unlike synchronous replication where commits block until replicas confirm.
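The lag between source and targets can be observed directly with the tooling used later in this article: the writer's Inserted counter on the source runs ahead of the reader's Read counter on the target, and the gap closes as replication catches up. A condensed sketch of the two-container rdm-replicate setup from Approach 2 (172.17.0.3 is the target container's address in that example):
# Source container: continuously insert rows via the built-in TFS.
core20Example --server --writer-only
# Source container, second shell: replicate asynchronously to the target.
rdm-replicate tfs://localhost/core20 tfs://172.17.0.3/core20
# Target container: read rows; the Read count trails the source's Inserted
# count and converges once replication applies the pending pack extensions.
core20Example --server --reader-only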
Foundational Technologies in RaimaDB
RaimaDB’s replication and hot backup depend on a storage engine that maintains data integrity, efficiency, and compatibility. Key elements include copy-on-write (COW) updates, pack files, vacuuming, and platform-agnostic data encoding.
Copy-On-Write (COW) Updates
RaimaDB uses a full recursive COW mechanism for database modifications. When data is updated, RaimaDB creates new copies of affected structures (e.g., B-trees for keyed data and R-trees for spatial data) instead of changing originals. This proceeds bottom-up: new items are written first, followed by parent nodes up to the main B-tree root.
COW allows snapshot isolation without read locks, supporting concurrent readers and writers. It avoids in-place updates, removing free-space management overhead and partial page writes. Benefits include:
- Space Efficiency: COW makes variable-length records trivial to support, whereas in-place updates make them hard to implement without fragmentation.
- Performance Gains: Parallel transactions are supported on independent tables.
- Rapid Recovery: There is no difference between a crash and a normal shutdown. When reopening the database, the only task is to locate the last commit mark; recovery markers every 64 KiB limit the amount of scanning needed. A crash can lose only the last in-progress transactions, since the append-only design confines damage to the end of the files.
- Durability Flexibility: Sync operations apply only to the latest pack file, lowering I/O.
Snapshots, part of COW, reference the current B-tree root for isolated views, enabling multiversion concurrency control (MVCC) without extra structures. This supports hot backups by permitting consistent point-in-time copies during live operations.
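Because updates never modify existing pack file content, the append-only property can be checked from the shell: an earlier copy of a pack file should remain an unmodified prefix of the grown file. A minimal sketch, assuming the core20 database produced by core20Example (directory and file names as shown later in this article), run from the directory containing core20.rdm:
# Create or extend the database, then snapshot the current pack file.
core20Example --writer-only --iterations=100
cp core20.rdm/p00000001.pack /tmp/pack.before
# Let the writer append more data.
core20Example --writer-only --iterations=100
# The old content should be an unmodified prefix of the grown pack file:
# cmp reports no difference within the first $(stat -c%s /tmp/pack.before) bytes.
cmp -n "$(stat -c%s /tmp/pack.before)" /tmp/pack.before core20.rdm/p00000001.pack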
Pack Files and Vacuuming
RaimaDB structures data into pack files, append-only files with a maximum size slightly under 2 GiB. Updates append to the end of the current pack file, referencing prior files without mid-file changes. This ensures sequential I/O, improving performance on SSDs and mechanical drives.
Stale data builds up over time, managed by vacuuming—a garbage collection process that traverses structures, moves active content to the latest pack file, and updates references. Vacuuming processes the oldest pack file first: if unreferenced, it is deleted; otherwise, data is clustered by key for better access. It runs without write locks on readers but briefly pauses updates to the vacuumed table.
Benefits of vacuuming include:
- Storage Optimization: Lowers disk footprint by consolidating active data and deleting obsolete files.
- Device Longevity: Reduces SSD wear through efficient garbage collection.
- I/O Efficiency: Clustered data boosts cache hit rates and sequential reads.
Pack file size limits need consideration for replication: a target on a system with a smaller maximum file size may fail if the source's packs exceed that limit, but the reverse poses no issue.
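The pack files themselves are ordinary files in the database directory and can be inspected directly; a simple way to watch them grow (and, after vacuuming, disappear) is to list the directory while a writer runs. A minimal sketch, assuming the core20 database used throughout this article:
# Run a writer in the background, then list the pack files every few seconds.
core20Example --writer-only &
while true; do ls -l core20.rdm/*.pack; sleep 5; done
# Each pack file stays just under 2 GiB; once the current one fills up, a new
# one is started, and vacuuming may later delete the oldest unreferenced file.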
Low-Level Data Formats for Cross-Platform Compatibility
RaimaDB’s on-disk format supports replication across platforms like Windows, macOS, Linux, RTOS-based embedded systems, and bare-metal microcontrollers. Data types are encoded in a platform-agnostic way:
- Integers: Variable-length encoding for endian-independent storage; fixed-size cases use network byte order (big-endian).
- Floating-Point Numbers: IEEE-754 format in network byte order, except on microcontrollers with limited FPU support, where custom handling applies.
- Strings: UTF-8 encoding internally and on-disk; Windows CP1252 is optionally accepted at the API level but converted to UTF-8 for storage.
- Binary Data: Byte-aligned units, with no endian or alignment issues.
- Encryption: Uniform algorithms across platforms ensure decryptable replicas.
Locale-dependent string sorting (collation) requires care: indexes on collated strings risk corruption across platform versions or locale updates. Avoid collated indexes for replicated databases to prevent inconsistencies.
As long as the target platform supports IEEE-754 and no locale-dependent collation is involved, it can receive the replicated data and produce byte-for-byte identical replicas.
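Because replicas are byte-for-byte identical, cross-platform replication can be verified with ordinary checksums: once the target has caught up (and the writer is paused), the pack files on source and target should hash to the same values. A minimal sketch, assuming the two-container setup used later in this article (target at 172.17.0.3) and that sha256sum is available on both sides:
# Source container: hash the local pack files, then the target's over SSH.
sha256sum core20.rdm/*.pack
ssh 172.17.0.3 'sha256sum core20.rdm/*.pack'
# Matching digests confirm the replica is an exact copy of the source packs.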
Setting Up with Docker
To demonstrate replication and hot backup, use Docker for reproducible environments. Raima provides an official image on Docker Hub: raimadb/rdm. Pull and run it as follows:
$ docker pull raimadb/rdm
Using default tag: latest
latest: Pulling from raimadb/rdm
Digest: sha256:343e56b54ecdf0214b80cbaf57fa72edfc7f8be28b5ed5c236754ff5296645c5
Status: Image is up to date for raimadb/rdm:latest
docker.io/raimadb/rdm:latest
$ docker run --hostname rdm -it raimadb/rdm
Welcome to the RDM License Request and Shell script!
Please fill out the following form to request a license for running RDM
in a container. If you have filled out this form within the past 90 days,
you can skip this by hitting enter when asked for the full name.
Enter your Full Name (leave blank to skip): John Doe
…
Form submitted successfully! You have a license for 90 days.
Please note that the RDM product may have a time-bomb that will expire
after a certain date. Please also note that this is not the same as a license
expiration. Make sure you have registered for a license as previously
instructed.
Checking whether a time-bomb has been triggered by executing rdm-compile...
The time-bomb has not been triggered!
This product expire on: 2026-05-05
Point your browser at:
http://172.17.0.2
to view the documentation
Dropping you into a bash shell...
ubuntu@rdm:~/GettingStarted$ cd c-core/core20Example/
ubuntu@rdm:~/GettingStarted/c-core/core20Example$ mkdir build
ubuntu@rdm:~/GettingStarted/c-core/core20Example$ cd build
ubuntu@rdm:~/GettingStarted/c-core/core20Example/build$ cmake ..
-- The C compiler identification is GNU 13.3.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Found RDM in: /opt/Raima/rdm_enterprise_enc_simple-16.1.0.808/lib/cmake/RDM
-- Configuring done (0.2s)
-- Generating done (0.0s)
-- Build files have been written to: /home/ubuntu/GettingStarted/c-core/core20Example/build
ubuntu@rdm:~/GettingStarted/c-core/core20Example/build$ make
[ 20%] Compile and embed schema for core20.sdl for C struct definition API
[ 40%] Building C object CMakeFiles/core20Example.dir/core20Example_main.c.o
[ 60%] Building C object CMakeFiles/core20Example.dir/example_fcns.c.o
[ 80%] Building C object CMakeFiles/core20Example.dir/core20_cat.c.o
[100%] Linking C executable core20Example
[100%] Built target core20Example
ubuntu@rdm:~/GettingStarted/c-core/core20Example/build$ ./core20Example -h
Usage:
core20Example [OPTION]...
Example that demonstrates server-side, client-side, and embedded TFS continuously inserting and/or reading rows
Short options can be combined into one string starting with a single '-'.
Mandatory arguments to long options are mandatory for short options too.
Long option arguments can also be specified in a separate argument.
-h, --help Display this usage information
-n, --no-fln-output Don't print file and line numbers for output messages
-s, --server Use the server-side TFS
-c, --client Use the client-side TFS
-r, --reader-only Read rows
-w, --writer-only Write rows
-m, --sleep-ms=MSECS
Sleep MSECS milliseconds between iterations
-i, --iterations=COUNT Run COUNT iterations
ubuntu@rdm:~/GettingStarted/c-core/core20Example/build$ ./core20Example --iterations=20 --sleep-ms 10
Read: 19 at /home/ubuntu/GettingStarted/c-core/core20Example/core20Example_main.c:134
ubuntu@rdm:~/GettingStarted/c-core/core20Example/build$
When running, the container prompts for license details (skippable if recent). It shows a URL (e.g., http://172.17.0.2) for browser-based documentation access. Each example, like core20Example, has dedicated pages. The container allows VS Code attachment for development, detailed in the main documentation (not covered here).
The core20Example is a simple application showing continuous row insertion and reading via RaimaDB’s Transactional File Server (TFS). It is agnostic to data organization, ideal for illustrating replication and hot backup without complex application logic.
For multi-container scenarios, build a custom image extending raimadb/rdm. This includes pre-built tools (rdm-hot-backup, core20Example, rdm-rsync) and SSH/rsync for inter-container communication.
Custom Dockerfile Explanation
The Dockerfile creates a hybrid build-and-run environment for demonstrating the approaches that follow:
FROM raimadb/rdm
# Build rdm-hot-backup as ubuntu (configure and build steps, without preset)
RUN cd /home/ubuntu/GettingStarted/rdm-hot-backup && \
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release && \
cmake --build build && \
cd /home/ubuntu/GettingStarted/c-core/core20Example && \
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release && \
cmake --build build
USER root
# Install required packages
RUN apt-get update && \
apt-get install -y openssh-server rsync inotify-tools
# Set up sudo for ubuntu user to start sshd and generate keys without password
RUN echo "ubuntu ALL=(ALL) NOPASSWD: /usr/sbin/sshd, /usr/bin/ssh-keygen" > /etc/sudoers.d/ubuntu-sshd && \
chmod 0440 /etc/sudoers.d/ubuntu-sshd
# Set up SSH server runtime directory
RUN mkdir /var/run/sshd
# Create private/public key pair for ubuntu user
RUN mkdir -p /home/ubuntu/.ssh && \
ssh-keygen -t rsa -b 2048 -f /home/ubuntu/.ssh/id_rsa -N "" && \
cat /home/ubuntu/.ssh/id_rsa.pub >> /home/ubuntu/.ssh/authorized_keys && \
chmod 700 /home/ubuntu/.ssh && \
chmod 600 /home/ubuntu/.ssh/authorized_keys && \
chmod 600 /home/ubuntu/.ssh/id_rsa && \
chown -R ubuntu:ubuntu /home/ubuntu/.ssh
# Configure SSH client for passwordless, promptless connections (disable host key checking)
RUN echo "Host *" > /home/ubuntu/.ssh/config && \
echo " StrictHostKeyChecking no" >> /home/ubuntu/.ssh/config && \
echo " UserKnownHostsFile /dev/null" >> /home/ubuntu/.ssh/config && \
echo " LogLevel ERROR" >> /home/ubuntu/.ssh/config && \
chown ubuntu:ubuntu /home/ubuntu/.ssh/config && \
chmod 600 /home/ubuntu/.ssh/config
# Copy rdm-rsync, rdm-hot-backup, and core20Example binaries to Raima installation
RUN RDM_DIR=$(ls /opt/Raima | grep -E -- '-[1-9][0-9]\.[0-9]$') && \
cp /home/ubuntu/GettingStarted/rdm-rsync/rdm-rsync /opt/Raima/"$RDM_DIR"/bin/rdm-rsync && \
chmod +x /opt/Raima/"$RDM_DIR"/bin/rdm-rsync && \
cp /home/ubuntu/GettingStarted/rdm-hot-backup/build/rdm-hot-backup /opt/Raima/"$RDM_DIR"/bin/. && \
cp /home/ubuntu/GettingStarted/c-core/core20Example/build/core20Example /opt/Raima/"$RDM_DIR"/bin/.
# Create startup script to generate host keys, start sshd (with sudo), and then the original command from the correct working directory
RUN echo '#!/bin/bash' > /usr/local/bin/start_sshd_and_shell.sh && \
echo 'sudo ssh-keygen -A' >> /usr/local/bin/start_sshd_and_shell.sh && \
echo 'sudo /usr/sbin/sshd -D &' >> /usr/local/bin/start_sshd_and_shell.sh && \
echo 'cd /home/ubuntu/GettingStarted' >> /usr/local/bin/start_sshd_and_shell.sh && \
echo '/usr/local/bin/request_license_and_shell.pl' >> /usr/local/bin/start_sshd_and_shell.sh
RUN chmod +x /usr/local/bin/start_sshd_and_shell.sh
# Switch back to ubuntu user and set working directory to match base image
USER ubuntu
WORKDIR /home/ubuntu/GettingStarted
# Set the new CMD
CMD ["/usr/local/bin/start_sshd_and_shell.sh"]
Explanation:
- Base Image: Extends raimadb/rdm and builds the bundled examples (rdm-hot-backup, core20Example).
- Dependencies: Installs openssh-server, rsync, and inotify-tools for secure inter-container data transfer and file monitoring.
- SSH Setup: Configures passwordless SSH with key generation, sudo privileges for sshd, and strict host key checking disabled for demo simplicity.
- Binary Placement: Copies tools to RaimaDB's bin directory so they are on the PATH.
- Startup Script: Generates host keys, starts the SSH daemon in the background, and invokes the license/shell script.
- User and Workdir: Reverts to ubuntu user and sets working directory for consistency.
Build the image (the command below assumes the Dockerfile is saved in a directory named my-rdm):
$ docker pull raimadb/rdm
…
$ docker build -t my-rdm my-rdm
…
Note: This image combines build and runtime for educational purposes. In production, separate build stages from runtime to minimize image size and security risks, especially for custom applications.
Approaches to Replication and Hot Backup
All approaches rely on the Docker image built above and produce byte-identical target replicas. Some allow live reads on targets during replication (hot replicas), while others create offline backups for failover. Linux is fully supported; examples use the custom my-rdm image.
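Several of the approaches move files between containers over SSH (rdm-rsync and plain rsync). Once a source and a target container built from this image are running, the passwordless SSH configured in the Dockerfile can be verified before starting any replication. A quick check from the source container (172.17.0.3 matches the target container's address in the examples below):
# Should print the target's hostname without any password or host-key prompt
# (StrictHostKeyChecking is disabled in the image).
ssh 172.17.0.3 hostname
# A trivial rsync over SSH exercises the same transport rdm-rsync will use.
rsync -av /etc/hostname 172.17.0.3:/tmp/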
Approach 1: Low-Level RaimaDB API
The base method uses RaimaDB's replication API to extract changes from a source and apply them to a target. This API can handle data serialization and transport, but applications can also manage transport themselves, typically for protocols RaimaDB does not support. It fits embedded systems without standard networking or custom integrations.
For convenience, Raima provides rdm-replicate, a command-line wrapper around this API, included in enterprise packages. It manages TFS connections for source and target, enabling cross-process/machine replication without custom code. Use cases include initial seeding or ongoing replication where read access to the target is required. Details are skipped here; see subsequent approaches for practical usage.
Approach 2: Using rdm-replicate Command-Line Tool
rdm-replicate enables asynchronous replication via TFS. We demonstrate it with core20Example running in two containers.
In the listings below, the prompts identify the container: commands at ubuntu@source run in the source container and commands at ubuntu@target run in the target container.
Start source container:
$ docker run --hostname source --name source --rm -it my-rdm
Welcome to the RDM License Request and Shell script!
…
http://172.17.0.2
…
ubuntu@source:~/GettingStarted$ cd ..
ubuntu@source:~$ core20Example --iterations 1
Read: 0 at /home/ubuntu/GettingStarted/c-core/core20Example/core20Example_main.c:148
ubuntu@source:~$
Start target container (separate terminal):
$ docker run --hostname target --name target --rm -it my-rdm
Welcome to the RDM License Request and Shell script!
…
http://172.17.0.3
…
ubuntu@target:~/GettingStarted$ cd ..
In source:
ubuntu@source:~$ rdm-rsync --copy --force core20 172.17.0.3:core20
Master SSH connection is down. Attempting to restart...
Master restarted successfully.
sending incremental file list
./
p00000001.pack
sent 820 bytes received 38 bytes 1,716.00 bytes/sec
total size is 1,440 speedup is 1.68
ubuntu@source:~$ core20Example --server --writer-only
Inserted: 27 at /home/ubuntu/GettingStarted/c-core/core20Example/core20Example_main.c:212
In target:
ubuntu@target:~$ core20Example --server --reader-only
Read: 0 at /home/ubuntu/GettingStarted/c-core/core20Example/core20Example_main.c:148
In source (third terminal):
$ docker exec -it source bash
ubuntu@source:~/GettingStarted$ rdm-replicate tfs://localhost/core20 tfs://172.17.0.3/core20
Database Replicate Utility
RaimaDB 16.1.0 Build 812 [11-7-2025] https://www.raima.com/
Copyright (c) 2024 Raima Inc., All rights reserved.
*** EVALUATION COPY ONLY (not for release) Contact sales@raima.com. ***
Updates from source propagate to target, observable in application output. Targets support live reads.
Variation: Standalone TFS
Replace the built-in TFS with the standalone rdm-tfs for both containers. Start rdm-tfs before core20Example --client. The --client option must be used instead of --server; this makes core20Example connect to the standalone TFS rather than use a built-in TFS (which can also accept remote connections). Using a built-in TFS eliminates the need for transport between core20Example and the TFS. However, with a built-in TFS, the database is unavailable to other clients (such as rdm-replicate) when the example is not running. This creates a trade-off between performance and uptime.
No initial rdm-rsync is needed if TFS initializes empty databases. Replication order: start TFS on source and target, start writer on source, run rdm-replicate, then start reader on target.
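A sketch of that ordering is shown below. It assumes rdm-tfs can be started with its default options from the directory that holds (or will hold) the core20 database, and that the target container is at 172.17.0.3 as in the previous example; consult the rdm-tfs documentation for its actual options.
# Source container: start the standalone TFS, then a client-side writer.
rdm-tfs &
core20Example --client --writer-only &
# Target container: start its own standalone TFS (the database starts empty).
rdm-tfs &
# Source container, second shell: replicate to the target's TFS.
rdm-replicate tfs://localhost/core20 tfs://172.17.0.3/core20 &
# Target container, second shell: read from the replica as it catches up.
core20Example --client --reader-only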
Approach 3: Using rdm-hot-backup for Hot Backup
rdm-hot-backup supports live backups by monitoring file system events (via Linux inotify) and mirroring changes to a target directory. It first copies the source, then applies appends, creations, and deletions in pack file order.
With this approach, you may need to change some system settings for inotify to work properly, especially if other software (e.g., VS Code with large projects) is already consuming inotify watches. Increase the limits as follows on the host where Docker is run:
$ sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances
fs.inotify.max_user_watches = 65536
fs.inotify.max_user_instances = 128
$ sudo sysctl -w fs.inotify.max_user_watches=524288
fs.inotify.max_user_watches = 524288
$ sudo sysctl -w fs.inotify.max_user_instances=1024
fs.inotify.max_user_instances = 1024
$ echo "fs.inotify.max_user_watches=524288" | sudo tee -a /etc/sysctl.conf
fs.inotify.max_user_watches=524288
$ echo "fs.inotify.max_user_instances=1024" | sudo tee -a /etc/sysctl.conf
fs.inotify.max_user_instances=1024
$ sudo sysctl -p
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 1024
Start a single container and first issue these commands:
$ docker run --hostname single --name single --rm -it my-rdm
...
http://172.17.0.2
...
ubuntu@single:~/GettingStarted$ cd ..
ubuntu@single:~$ core20Example --writer-only
Inserted: 133 at /home/ubuntu/GettingStarted/c-core/core20Example/core20Example_main.c:212
In another terminal:
$ docker exec -it single bash
ubuntu@single:~/GettingStarted$ cd ..
ubuntu@single:~$ rdm-hot-backup core20 core20-copy
^C
ubuntu@single:~$ ls -l core20.rdm core20-copy.rdm
core20-copy.rdm:
total 36
-rw-r--r-- 1 ubuntu ubuntu 33536 Nov 7 17:31 p00000001.pack
core20.rdm:
total 36
-rw-r--r-- 1 ubuntu ubuntu 33936 Nov 7 17:31 p00000001.pack
ubuntu@single:~$ rdm-hot-backup core20 core20-copy
^C
ubuntu@single:~$ ls -l core20.rdm core20-copy.rdm
core20-copy.rdm:
total 40
-rw-r--r-- 1 ubuntu ubuntu 40384 Nov 7 17:32 p00000001.pack
core20.rdm:
total 44
-rw-r--r-- 1 ubuntu ubuntu 40784 Nov 7 17:32 p00000001.pack
The above shows that the hot backup can be interrupted at any time, and progress can be observed with the ls command line tool.
Requirements:
- Source on a local filesystem (no NFS).
- Tool must keep up with RaimaDB's write pace to avoid inconsistencies.
- Pack files copied in ascending order; appends mirrored exactly.
- Sufficient kernel memory for inotify watches (see the limits above).
Advantages: Simple, efficient (blocks on idle), low engine interference. Disadvantages: Target unusable during backup. Transactions that have not been completely copied will be ignored.
The tool integrates with COW by respecting pack file finalization, and it is resumable on restart, with some overlapping pack content copied again.
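Because the backup is resumable, rdm-hot-backup does not have to run continuously. A sketch that bounds each run with timeout, sending the same SIGINT as the Ctrl-C used in the transcript above (the 60-second window is an arbitrary choice):
# Copy changes for up to 60 seconds, then stop; rerunning later resumes from
# the existing core20-copy contents and re-copies any overlapping pack data.
timeout -s INT 60 rdm-hot-backup core20 core20-copy
# Progress can be compared against the live database as before.
ls -l core20.rdm core20-copy.rdm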
Approach 4: Custom File System Monitoring or rsync-Based Solutions
For non-Linux or advanced needs, develop custom tools using OS-specific APIs (e.g., Windows ReadDirectoryChangesW). Follow pack file rules: ascending copies, synchronized appends/creations.
Alternatively, use rdm-rsync (included in the custom image) for initial/seeded copies, combined with periodic snapshots. This suits offline backups or hybrid setups, ensuring byte-identical targets for failover. Not live-readable during sync but efficient for periodic replication.
rdm-rsync is a script that uses rsync with file watches (via inotify) to monitor changes and perform incremental syncs.
Example:
Start source container:
$ docker run --hostname source --name source --rm -it my-rdm
...
http://172.17.0.2
...
ubuntu@source:~/GettingStarted$ cd ..
ubuntu@source:~$ core20Example --writer-only
Inserted: 73 at /home/ubuntu/GettingStarted/c-core/core20Example/core20Example_main.c
In a second terminal:
$ docker run --hostname target --name target --rm -it my-rdm
...
http://172.17.0.3
...
ubuntu@target:~/GettingStarted$ cd ..
ubuntu@target:~$ ls -l core20.rdm
ls: cannot access 'core20.rdm': No such file or directory
ubuntu@target:~$
Since no copying has started, the ls command shows an error. Repeat later to observe content being copied.
In a third terminal, exec into source:
$ docker exec -it source bash
ubuntu@source:~/GettingStarted$ cd ..
ubuntu@source:~$ rdm-rsync core20 172.17.0.3:core20
Master SSH connection is down. Attempting to restart...
Master restarted successfully.
Setting up watches.
Watches established.
sending incremental file list
./
p00000001.pack
sent 2,994 bytes received 38 bytes 6,064.00 bytes/sec
total size is 17,344 speedup is 5.72
sending incremental file list
p00000001.pack
sent 311 bytes received 35 bytes 692.00 bytes/sec
total size is 17,760 speedup is 51.33
sending incremental file list
p00000001.pack
sent 318 bytes received 35 bytes 706.00 bytes/sec
total size is 18,224 speedup is 51.63
sending incremental file list
p00000001.pack
...
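For the periodic, offline-style backups mentioned above, the watcher does not have to run continuously; the one-shot mode used for seeding in Approach 2 can be invoked on a schedule instead. A sketch for the source container (the hourly interval is an arbitrary choice):
# Refresh the target copy once per hour; --copy --force performs a single
# sync and exits instead of watching for further changes.
while true; do
    rdm-rsync --copy --force core20 172.17.0.3:core20
    sleep 3600
done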
In summary, RaimaDB’s approaches address diverse requirements, from API-driven custom integrations to turnkey tools, all using its core architecture for reliable, cross-platform data management.