If you are replicating events between different data centers (rather than between two Kafka clusters in the same data center), we recommend running the Connect Workers in the *destination* data center. So if you are sending data from NYC to SF, Replicator should run in SF and consume data across the US from NYC. The reason for this is that long-distance networks can be a bit less reliable than networks inside a data center. If there is a network partition and you lose connectivity between the data centers, having a consumer that is unable to connect to a cluster is much safer than a producer that can't connect. If the consumer can't connect, it simply won't be able to read events - but the events are still stored in the origin Kafka cluster and can remain there for a long time. On the other hand, if the events were already consumed and Replicator can't produce them due to a network partition, there is always a risk that these events will accidentally get lost. So remote consuming is safer than remote producing.

Replicator's topic.rename.format setting (for example, ${topic}.replica) controls how replicated topics are named in the destination cluster: ${topic} will be substituted with the topic name from the origin cluster. That means that the test-topic we're replicating from the origin cluster will be renamed to test-topic.replica in the destination cluster.
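To make this concrete, here is a minimal sketch of a Replicator connector configuration for the setup described above, assuming the Connect worker runs in the destination (SF) data center. The host names are hypothetical placeholders; the property keys are Confluent Replicator's, but check the Replicator documentation for the full set required by your version:

```properties
name=replicator-nyc-to-sf
connector.class=io.confluent.connect.replicator.ReplicatorSourceConnector

# Origin (NYC) cluster - the worker consumes from here across the WAN
src.kafka.bootstrap.servers=kafka-nyc.example.com:9092
# Destination (SF) cluster - local to the Connect worker
dest.kafka.bootstrap.servers=kafka-sf.example.com:9092

# Which topics to replicate
topic.whitelist=test-topic

# ${topic} is substituted with the origin topic name, so test-topic
# is created as test-topic.replica in the destination cluster
topic.rename.format=${topic}.replica

# Replicator copies raw bytes, so use the ByteArrayConverter
key.converter=io.confluent.connect.replicator.util.ByteArrayConverter
value.converter=io.confluent.connect.replicator.util.ByteArrayConverter
```

Because the worker sits in SF, only the consumer side of this connector crosses the long-distance link, which is exactly the failure mode the paragraph above argues is the safer one.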