Replies: 3 comments
-
You have a few options:
-
The only production-ready method is to use Debezium/Kafka and either pull the messages into ClickHouse using a Kafka engine table, or push them via a Kafka connector (all options available here). In both cases, you would need an additional materialized view to parse the Debezium messages and eventually tables with
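In case a concrete shape helps, here is a minimal sketch of such a pipeline, assuming a Debezium topic `dbserver1.mydb.users` on a broker at `kafka:9092`, a simple `users` schema, and a ReplacingMergeTree target; all names, JSON paths, and settings are illustrative and depend on your connector configuration (e.g. whether the Debezium envelope is flattened):

```sql
-- Kafka engine table that reads raw Debezium messages as strings (hypothetical topic/broker).
CREATE TABLE mysql_cdc_queue
(
    message String
)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'kafka:9092',
         kafka_topic_list = 'dbserver1.mydb.users',
         kafka_group_name = 'clickhouse_users_consumer',
         kafka_format = 'JSONAsString';

-- Target table; ReplacingMergeTree keeps only the latest version of each row after merges.
CREATE TABLE users
(
    id         UInt64,
    name       String,
    updated_at DateTime,
    is_deleted UInt8
)
ENGINE = ReplacingMergeTree(updated_at)
ORDER BY id;

-- Materialized view that parses the Debezium envelope and writes into the target table.
-- For delete events ('op' = 'd') the key comes from 'before', otherwise from 'after'.
CREATE MATERIALIZED VIEW users_mv TO users AS
SELECT
    if(JSONExtractString(message, 'payload', 'op') = 'd',
       JSONExtractUInt(message, 'payload', 'before', 'id'),
       JSONExtractUInt(message, 'payload', 'after', 'id'))   AS id,
    JSONExtractString(message, 'payload', 'after', 'name')   AS name,
    now()                                                    AS updated_at,
    JSONExtractString(message, 'payload', 'op') = 'd'        AS is_deleted
FROM mysql_cdc_queue;
```

Note that reads then need `FINAL` (or explicit deduplication in the query) to hide superseded or deleted rows until background merges catch up.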
-
There is also this project, now at v2.0 and GA: https://github.com/Altinity/clickhouse-sink-connector. It handles most DDL (sink-connector-lightweight).
-
What is the simplest way to keep ClickHouse always in sync with MySQL when both run on a single server? No cluster, no mirroring/replication.
Many tutorials use complicated techniques to achieve this, involving Debezium, Kafka, and other tools.
While that may be necessary for a large multi-cluster or distributed data store, our case is much simpler.
Currently we frequently bulk insert, update, or delete data in MySQL.
We need to apply these operations in ClickHouse as well, but the docs say that update/delete mutations are not intended for frequent use.
So the MySQL tables are still needed, but how do we keep both in sync?
We are thinking about truncating the ClickHouse tables and re-inserting all data from MySQL each time a mutation happens on the MySQL side (a rough sketch of this idea follows below).
But that doesn't sound like a good idea when working with large data.
So I'm looking for advice here; I'd really appreciate it :)
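For reference, a minimal sketch of the "full reload" idea mentioned above, made a bit safer by loading the snapshot into a staging table and swapping it in atomically rather than truncating the live table. This assumes a local MySQL at 127.0.0.1:3306, a `users` table, and ClickHouse's default Atomic database engine; table names, columns, and credentials are placeholders:

```sql
-- Staging table with the same structure and engine as the live table.
CREATE TABLE users_staging AS users;

-- Pull a full snapshot straight from MySQL via the mysql() table function.
INSERT INTO users_staging (id, name, updated_at)
SELECT id, name, updated_at
FROM mysql('127.0.0.1:3306', 'mydb', 'users', 'mysql_user', 'mysql_password');

-- Atomically swap the snapshot into place, then drop the old data.
EXCHANGE TABLES users AND users_staging;
DROP TABLE users_staging;
```

EXCHANGE TABLES is atomic, so readers never see an empty or half-loaded table; the downside is that every sync re-copies the whole table, which is exactly the cost that becomes a problem with large data.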