Kafka Streams: reduce the size of the state stored by aggregate
I am using Kafka Streams version 3.5.1 in Scala, and I want to reduce the amount of data saved by the aggregate function.
I have an event with around 200 fields. What I want to achieve in the aggregate is to compute the difference in 5 of those fields (some are numbers where I need the numeric difference; for the others I only need to know whether they changed), but at the end I need to forward all the fields to the next topic. As far as I know, if I want the stream derived from the KTable to carry all the fields, the aggregate also has to store all the fields.
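For context, here is a trimmed-down sketch of what I have today. The `Event` case class, the pipe-delimited serde, and the topic names are placeholders, not my real code; the real event has ~200 fields:

```scala
import org.apache.kafka.common.serialization.Serde
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.serialization.Serdes
import org.apache.kafka.streams.scala.serialization.Serdes._

// Stand-in for my real event; in reality it has ~200 fields.
case class Event(id: String, amount: Double, status: String)

// Naive pipe-delimited serde just so the sketch is self-contained.
implicit val eventSerde: Serde[Event] = Serdes.fromFn[Event](
  (e: Event) => s"${e.id}|${e.amount}|${e.status}".getBytes("UTF-8"),
  (bytes: Array[Byte]) => {
    val p = new String(bytes, "UTF-8").split("\\|", -1)
    Some(Event(p(0), p(1).toDouble, p(2)))
  }
)

val builder = new StreamsBuilder()

// Because the downstream topic needs every field, the aggregate currently keeps
// the whole event as its state, so all ~200 fields end up in the state store.
builder
  .stream[String, Event]("events")
  .groupByKey
  .aggregate(Event("", 0.0, "")) { (_, incoming, previous) =>
    // diff `previous` vs `incoming` on the 5 relevant fields here,
    // but the value stored in the KTable is still the full event
    incoming
  }
  .toStream
  .to("events-enriched")
```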
Is there an easy way to take a 200-field event, save only the 5 relevant fields in the KTable, and then return the full original event with some extra fields? I don't know if it matters, but I'm grouping by key.
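Conceptually, what I'm imagining is something like the sketch below (continuing from the code above, reusing `Event`, `eventSerde` and `builder`): keep only a small `Snapshot` in a state store and re-attach the full incoming event on the way out. All the names (`Snapshot`, `Enriched`, `event-snapshots`) are made up, and I'm not sure this is the idiomatic way to do it in the DSL:

```scala
import org.apache.kafka.streams.kstream.{ValueTransformerWithKey, ValueTransformerWithKeySupplier}
import org.apache.kafka.streams.processor.ProcessorContext
import org.apache.kafka.streams.state.{KeyValueStore, Stores}

// Only the fields the diff actually needs.
case class Snapshot(amount: Double, status: String)

// The untouched incoming event plus the computed extras.
case class Enriched(event: Event, amountDelta: Double, statusChanged: Boolean)

val snapshotSerde: Serde[Snapshot] = Serdes.fromFn[Snapshot](
  (s: Snapshot) => s"${s.amount}|${s.status}".getBytes("UTF-8"),
  (bytes: Array[Byte]) => {
    val p = new String(bytes, "UTF-8").split("\\|", -1)
    Some(Snapshot(p(0).toDouble, p(1)))
  }
)

val storeName = "event-snapshots"
builder.addStateStore(
  Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore(storeName), stringSerde, snapshotSerde)
)

val enriched = builder
  .stream[String, Event]("events")
  .transformValues(
    new ValueTransformerWithKeySupplier[String, Event, Enriched] {
      override def get(): ValueTransformerWithKey[String, Event, Enriched] =
        new ValueTransformerWithKey[String, Event, Enriched] {
          private var store: KeyValueStore[String, Snapshot] = _

          override def init(context: ProcessorContext): Unit =
            store = context.getStateStore(storeName).asInstanceOf[KeyValueStore[String, Snapshot]]

          override def transform(key: String, event: Event): Enriched = {
            val prev = Option(store.get(key))
            // only the 5 relevant fields are persisted, not the 200-field event
            store.put(key, Snapshot(event.amount, event.status))
            Enriched(
              event,
              amountDelta = prev.map(event.amount - _.amount).getOrElse(0.0),
              statusChanged = prev.exists(_.status != event.status)
            )
          }

          override def close(): Unit = ()
        }
    },
    storeName
  )

// enriched.to("events-enriched")  // with an appropriate serde for Enriched
```

Is something along these lines reasonable, or is there a simpler way to do it with aggregate itself?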