[Cloudevents] Switch to Debezium 2.0 #292
base: main
Conversation
Force-pushed from 39c10fc to 62f1754
Force-pushed from 7b05aee to 9c52c91
Also fixed https://issues.redhat.com/browse/DBZ-5720
Force-pushed from 9c52c91 to 5d97f6b
@@ -65,7 +65,7 @@ curl -i -X PUT -H "Accept:application/json" -H "Content-Type:application/json"
   docker run --rm --tty \
     --network cloudevents-network \
     quay.io/debezium/tooling:1.2 \
-    kafkacat -b kafka:9092 -C -o beginning -q -s value=avro -r http://schema-registry:8081 \
+    kafkacat -b kafka:9092 -C -o beginning -q \
@vjuranek Shouldn't the README.md descriptions also be updated? I suppose they are not aligned with the new kafkacat output.
Yes, it's not aligned, but I have no idea what to use as a replacement. Removing the schema option makes the command succeed (with the schema option it fails); the output is not nice, but still good enough for the user to verify that the data was written there. The output looks like this:
$ docker run --rm --tty --network cloudevents-network quay.io/debezium/tooling:1.2 kafkacat -b kafka:9092 -C -o beginning -q -t customers3
�
Sally
Thomas*[email protected]
�
George
[email protected]
�
Edward
Walker▒[email protected]
@vjuranek Are the schemas different? If yes, can we replace the samples by querying the schemas from the Apicurio registry?
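For reference, a minimal sketch of pulling the samples straight from the registry over the Apicurio Registry v2 REST API, assuming the schema-registry host/port from the commands in this thread are reachable (e.g. by running the commands from a container attached to cloudevents-network); the artifact id is a placeholder:
# list registered artifacts
curl -s http://schema-registry:8080/apis/registry/v2/search/artifacts
# fetch the latest content of one artifact (default group, hypothetical artifact id)
curl -s http://schema-registry:8080/apis/registry/v2/groups/default/artifacts/<artifact-id>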
@jpechane I'm not sure I follow. Which schemas? There is only one schema for all events. Maybe there is a misunderstanding, as I forgot to remove the schema option from kafkacat -b kafka:9092 -C -o beginning -q -s value=avro -r http://schema-registry:8080 in the previous example. This is a mistake - it doesn't work either. I will remove it soon.
@vjuranek OK, maybe it would make sense to rework the example a bit more. Apicurio provides a JSON converter that does not embed the schema in the message but stores it in the registry, in the same way Avro does. How about changing the example to use it instead of Avro? In that case the output would be nicely presentable.
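For illustration, a minimal sketch of what the relevant converter settings in the connector registration payload might look like in that case, assuming Apicurio's ExtJsonConverter and its apicurio.registry.* converter options, with the registry URL used elsewhere in this thread:
"value.converter": "io.apicurio.registry.utils.converter.ExtJsonConverter",
"value.converter.apicurio.registry.url": "http://schema-registry:8080/apis/registry/v2",
"value.converter.apicurio.registry.auto-register": "true"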
@jpechane Unfortunately this is not possible right now, as CloudEventsConverter works with the schema registry only when Avro is used, and there are also other issues. I spent quite some time on it today with no reasonable results, so I'll try to prepare a reproducer and raise an issue with Apicurio.
For the record, I found out that when one wants to use Confluent-based tools, the apis/ccompat/v6 endpoint should be used, i.e. the command should be something like this:
docker run --rm --tty --network cloudevents-network quay.io/debezium/tooling:1.2 kafkacat -b kafka:9092 -C -o beginning -s value=avro -r http://schema-registry:8080/apis/ccompat/v6 -t dbserver3.inventory.customers
With this it still fails, but at least it gives a somewhat reasonable error:
% ERROR: Failed to format message in dbserver3.inventory.customers [0] at offset 0: Avro/Schema-registry message deserialization: REST request failed (code 404): {"message":"No content with id/hash 'contentId-0' was found.","error_code":40403}: terminating
So far I have no idea why kafkacat requests the schema with ID 0 (which is not present; IDs are present starting from 1), where that comes from, or how to fix it.
OK, let's hold this PR until the issue is resolved. BTW, I'd prefer not to use the ccompat API just to accommodate different tooling. WRT the schema, there is apicurio.registry.id-handler, which should be set to io.apicurio.registry.serde.Legacy4ByteIdHandler.
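A sketch of how that handler might be passed through the connector configuration, assuming the standard Kafka Connect value.converter. prefix for forwarding converter options (whether this is the right place to set it is an assumption):
"value.converter.apicurio.registry.id-handler": "io.apicurio.registry.serde.Legacy4ByteIdHandler"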
Using apicurio.registry.id-handler doesn't seem to help; it just gives me another exception. I also found out that the issue mentioned above is already reported as apicurio-registry #2878.
Force-pushed from 5d97f6b to fdfb815