While working on a pipeline like PostgresTableSource → Join → SqlToStream → Java map → collect, I hit a runtime ClassCastException. The root cause is that after the SQL join, the data is actually in a tuple-like structure (e.g., Tuple2&lt;Record, Record&gt;), but the SqlToStream conversion does not preserve that structure and treats everything as a single flat Record.
As a result, when the Java map step accesses fields or casts types, it assumes the wrong structure and crashes at runtime (e.g., "String cannot be cast to Integer"). There is no validation between the SQL and Java stages, so the mismatch is not caught early and only surfaces during execution.
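A minimal standalone repro of the failure mode, assuming the flattened row is exposed as a field-name-to-value map (the Map representation and the `mapStep` helper are hypothetical, chosen only to illustrate the cast):

```java
import java.util.Map;

public class JoinTypeMismatch {
    // Hypothetical view of a joined row after SqlToStream: fields from both
    // sides flattened into one map, the Tuple2<Record, Record> structure lost.
    static Integer mapStep(Map<String, Object> row) {
        // The Java map step assumes "id" is an INT (as in table_a below),
        // but the joined value actually came from a TEXT column.
        return (Integer) row.get("id"); // throws ClassCastException at runtime
    }

    public static void main(String[] args) {
        Map<String, Object> row = Map.of("id", "42", "value", "x");
        try {
            mapStep(row);
        } catch (ClassCastException e) {
            System.out.println("Crashed as described: " + e.getMessage());
        }
    }
}
```

The compiler cannot help here because the flattened row only carries `Object` values; the mismatch exists at the schema level, which is exactly what is invisible to the Java stage.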
Ideally, the correct schema/types would be preserved (or at least validated) at the SQL-to-Java boundary, so that such mismatches are caught before execution rather than at runtime.
For example:
CREATE TABLE table_a (
    id   INT,
    name TEXT
);

CREATE TABLE table_b (
    id    TEXT,  -- different type from table_a.id
    value TEXT
);
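One way to catch this early is a fail-fast schema check run before the pipeline executes, comparing what the SQL stage produces against what the Java map expects. This is only a sketch; the `validate` helper and the Map-of-Class schema representation are assumptions, not part of any real SqlToStream API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SchemaCheck {
    // Compare the Java stage's expected field types against the types the
    // SQL stage actually produces, and fail before any data flows.
    public static void validate(Map<String, Class<?>> expected,
                                Map<String, Class<?>> actual) {
        for (Map.Entry<String, Class<?>> e : expected.entrySet()) {
            Class<?> got = actual.get(e.getKey());
            if (got == null) {
                throw new IllegalStateException("Missing field: " + e.getKey());
            }
            if (!e.getValue().equals(got)) {
                throw new IllegalStateException(
                    "Field '" + e.getKey() + "': Java stage expects "
                    + e.getValue().getSimpleName() + " but SQL stage produces "
                    + got.getSimpleName());
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Class<?>> javaSide = new LinkedHashMap<>();
        javaSide.put("id", Integer.class);   // Java map expects INT
        Map<String, Class<?>> sqlSide = new LinkedHashMap<>();
        sqlSide.put("id", String.class);     // table_b.id is TEXT
        validate(javaSide, sqlSide);         // fails fast with a clear message
    }
}
```

The point of the check is that the error message names the field and both types, instead of an opaque ClassCastException deep inside the map step.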
