As part of our ongoing updates to the Ascend Platform, we’ve released a number of new features and improvements.
- Support for Byte, Short, Binary, and Decimal schema types
- Add Kafka read connector
- Add Postgres CDC connector
- Add Flex-Code Connector persistence library to support stream processing and CDC workloads
- Flex-Code Connectors – Ingest unsupported data types as strings by default
- Flex-Code Connectors – Use JSON syntax when nested types (Map/Array/Struct) are configured to cast to String
- Flex-Code Connectors – On the Connection Details screen, show all associated Read Connectors
- Flex-Code Connectors – Add support for public S3 bucket connections
- Flex-Code Connectors – Add private SSH gateway support
- Flex-Code Connectors – Fetch “last_updated” and “record_count” metadata values from database list_objects call where available
- Flex-Code Connectors – Ensure Snowflake schema is filled in when interacting with the connection browser
- Flex-Code Connectors API – Call list_assets with configuration parameter instead of metadata
- Install the Ascend SDK on the PySpark Docker image so it’s available in PySpark components
- Flex-Code Connectors – Boost connector performance and efficiency by scaling Connector resources based on partition size
- Flex-Code Connectors – Group Connection Types by category
- Allow manual override of SQL Transform fingerprints, giving users greater control over when a SQL Transform reprocesses data
- Flex-Code Connectors – CSV parser enhancements – support multiline values, and use FAILFAST as the default parser mode so corrupted rows are never silently skipped
- Flex-Code Connectors – Propagate Snowflake connection initialization exceptions to the UI
- Queryable Dataflows – Handle query state failures arising from bad or missing components
- Queryable Dataflows – Fix UI issue where queries appeared to be stuck in the “running” state
- Flex-Code Connectors – Reprocess Read Connectors correctly when a schema change is detected
- Fix edge case where credential vault entries could enter an inconsistent state and become uneditable
- Fix Spark job heartbeat monitor for faster recovery of tasks with no live executors
- Fix edge case where Spark job executors were scheduled in the wrong zone and hence unable to start
- Fix errors for completely filtered inputs with the new schema types by supplying the necessary dummy values
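To illustrate the nested-type casting behavior above, here’s a minimal sketch in plain Python (not Ascend’s actual implementation; the record and helper names are hypothetical): a Map, Array, or Struct value cast to String is rendered as JSON, so downstream consumers can parse it unambiguously.

```python
import json

# Hypothetical nested record read by a connector -- illustrative only.
record = {
    "id": 7,
    "tags": ["a", "b"],
    "address": {"city": "Oakland", "zip": "94601"},
}

def cast_nested_to_string(value):
    """Cast Map/Array/Struct values to String using JSON syntax;
    scalar values pass through unchanged."""
    if isinstance(value, (dict, list)):
        return json.dumps(value, separators=(",", ":"))
    return value

casted = {k: cast_nested_to_string(v) for k, v in record.items()}
print(casted["tags"])     # '["a","b"]'
print(casted["address"])  # '{"city":"Oakland","zip":"94601"}'
```

The JSON form round-trips cleanly: `json.loads` on the casted string recovers the original nested structure, which a language-specific `repr` would not guarantee.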
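The CSV parser changes above combine two behaviors: quoted values may span multiple lines, and malformed rows raise an error instead of being dropped (Spark’s FAILFAST mode). A rough sketch of both, using Python’s standard `csv` module rather than Ascend’s or Spark’s actual parser (the function and payload here are illustrative assumptions):

```python
import csv
import io

# Hypothetical payload: the "notes" value of row 1 spans two lines.
data = 'id,notes\n1,"line one\nline two"\n2,short\n'

def parse_csv_failfast(text, expected_cols=2):
    """Parse CSV text, preserving quoted multiline values, and
    raise on any row with the wrong column count (emulating
    FAILFAST rather than silently skipping corrupted rows)."""
    rows = list(csv.reader(io.StringIO(text)))
    header, body = rows[0], rows[1:]
    for i, row in enumerate(body, start=1):
        if len(row) != expected_cols:
            raise ValueError(f"corrupted row {i}: {row!r}")
    return header, body

header, body = parse_csv_failfast(data)
print(body[0][1])  # line one\nline two -- multiline value kept intact
```

A row such as `1,2,3` against a two-column header would raise `ValueError` here, mirroring how FAILFAST surfaces corruption immediately instead of producing a partially ingested dataset.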
If you need support, have questions, or would like to get started on the platform, follow the link below!