
Bump snowflake-snowpark-python from 1.26.0 to 1.33.0 in /python-wrapper #185


Open
wants to merge 1 commit into main

Conversation

dependabot[bot]

@dependabot dependabot bot commented on behalf of github Jun 23, 2025

Bumps snowflake-snowpark-python from 1.26.0 to 1.33.0.

Release notes

Sourced from snowflake-snowpark-python's releases.

Release

1.33.0 (2025-06-19)

Snowpark Python API Updates

New Features

  • Added support for MySQL in DataFrameReader.dbapi (PrPr) for both Parquet and UDTF-based ingestion.
  • Added support for PostgreSQL in DataFrameReader.dbapi (PrPr) for both Parquet and UDTF-based ingestion.
  • Added support for Databricks in DataFrameReader.dbapi (PrPr) for UDTF-based ingestion.
  • Added support to DataFrameReader to enable use of PATTERN when reading files with INFER_SCHEMA enabled.
  • Added support for the following AI-powered functions in functions.py:
    • ai_complete
    • ai_similarity
    • ai_summarize_agg (originally summarize_agg)
    • different config options for ai_classify
  • Added support for more options when reading XML files with a row tag using rowTag option:
    • Added support for removing namespace prefixes from column names using ignoreNamespace option.
    • Added support for specifying the prefix for the attribute column in the result table using attributePrefix option.
    • Added support for excluding attributes from the XML element using excludeAttributes option.
    • Added support for specifying the column name for the value when there are attributes in an element that has no child elements using valueTag option.
    • Added support for specifying the value to treat as a null value using nullValue option.
    • Added support for specifying the character encoding of the XML file using charset option.
    • Added support for ignoring surrounding whitespace in the XML element using ignoreSurroundingWhitespace option.
  • Added support for parameter return_dataframe in Session.call, which can be used to set the return type of the functions to a DataFrame object.
  • Added a new argument to DataFrame.describe called strings_include_math_stats that triggers stddev and mean to be calculated for String columns.
  • Added support for retrieving Edge.properties when retrieving lineage from DGQL in DataFrame.lineage.trace.
  • Added a parameter table_exists to DataFrameWriter.save_as_table that allows specifying if a table already exists. This allows skipping a table lookup that can be expensive.
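Several of the additions above can be sketched together in one hedged fragment. The option and parameter names (the rowTag-related reader options, strings_include_math_stats, table_exists, return_dataframe) come from the notes; the session object, the @my_stage/books.xml path, the BOOKS table, the my_proc procedure, and the reader's xml() entry point are placeholders/assumptions, so the function is defined for illustration and not invoked here.

```python
def demo_new_options(session):
    """Illustrative sketch of several 1.33.0 additions; names of the stage
    path, table, and procedure are hypothetical placeholders."""
    # Read XML with one row per <book> element, using the new rowTag-related options.
    books = (
        session.read
        .option("rowTag", "book")
        .option("ignoreNamespace", True)              # drop namespace prefixes from column names
        .option("attributePrefix", "_")               # attribute columns get a "_" prefix
        .option("nullValue", "N/A")                   # treat "N/A" as SQL NULL
        .option("charset", "utf-8")                   # file encoding
        .option("ignoreSurroundingWhitespace", True)  # trim whitespace around element text
        .xml("@my_stage/books.xml")
    )

    # Include stddev and mean for string columns in describe output.
    stats = books.describe(strings_include_math_stats=True)

    # Skip the (potentially expensive) existence lookup when the table is known to exist.
    books.write.save_as_table("BOOKS", mode="append", table_exists=True)

    # Get a stored procedure's result back as a DataFrame instead of a scalar.
    result_df = session.call("my_proc", 42, return_dataframe=True)
    return stats, result_df
```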

Bug Fixes

  • Fixed a bug in DataFrameReader.dbapi (PrPr) where a create_connection defined as a local function was incompatible with multiprocessing.
  • Fixed a bug in DataFrameReader.dbapi (PrPr) where the Databricks TIMESTAMP type was converted to the Snowflake TIMESTAMP_NTZ type instead of TIMESTAMP_LTZ.
  • Fixed a bug in DataFrameReader.json where repeated reads with the same reader object would create incorrectly quoted columns.
  • Fixed a bug in DataFrame.to_pandas() that would drop column names when converting a dataframe that did not originate from a select statement.
  • Fixed a bug where DataFrame.create_or_replace_dynamic_table raised an error when the dataframe contained a UDTF, because SELECT * in the UDTF was not parsed correctly.
  • Fixed a bug where casted columns could not be used in the values clause of in functions.

Improvements

  • Improved the error message for Session.write_pandas() and Session.create_dataframe() when the input pandas DataFrame does not have a column.
  • Improved DataFrame.select when the arguments contain a table function whose output columns collide with columns of the current dataframe. With this improvement, if the user passes non-colliding columns as string arguments, as in df.select("col1", "col2", table_func(...)), the query generated by the Snowpark client will not raise an ambiguous-column error.
  • Improved DataFrameReader.dbapi (PrPr) to use in-memory Parquet-based ingestion for better performance and security.
  • Improved DataFrameReader.dbapi (PrPr) to use MATCH_BY_COLUMN_NAME=CASE_SENSITIVE in copy into table operation.
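The DataFrame.select improvement above can be illustrated with a minimal sketch; df, split_to_table, and the column names are hypothetical stand-ins for a real dataframe and table function, so the helper is defined but not invoked.

```python
def select_with_table_function(df, split_to_table):
    # Before 1.33.0 this pattern could raise an ambiguous-column error when the
    # table function's output columns collided with the dataframe's own columns;
    # non-colliding string arguments like "id" and "name" now resolve cleanly.
    return df.select("id", "name", split_to_table(df["tags"], ","))
```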

Snowpark Local Testing Updates

New Features

... (truncated)

Changelog

Sourced from snowflake-snowpark-python's changelog.

1.33.0 (YYYY-MM-DD)

Snowpark Python API Updates

New Features

  • Added support for MySQL in DataFrameReader.dbapi (PrPr) for both Parquet and UDTF-based ingestion.
  • Added support for PostgreSQL in DataFrameReader.dbapi (PrPr) for both Parquet and UDTF-based ingestion.
  • Added support for Databricks in DataFrameReader.dbapi (PrPr) for UDTF-based ingestion.
  • Added support to DataFrameReader to enable use of PATTERN when reading files with INFER_SCHEMA enabled.
  • Added support for the following AI-powered functions in functions.py:
    • ai_complete
    • ai_similarity
    • ai_summarize_agg (originally summarize_agg)
    • different config options for ai_classify
  • Added a TTL cache for describe queries. Repeated queries within a 15-second interval use the cached value rather than re-querying Snowflake.
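The TTL-cache idea can be shown with a generic, self-contained sketch. This is an illustration of the caching behavior described above (reuse within a time window), not Snowpark's internal implementation; the class and method names are invented for the example.

```python
import time


class TTLCache:
    """Generic sketch of a time-to-live cache: a result computed for a key is
    reused for ttl_seconds before being recomputed."""

    def __init__(self, ttl_seconds=15.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (timestamp, value)

    def get_or_compute(self, key, compute):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]          # fresh enough: serve the cached value
        value = compute()          # stale or missing: recompute (i.e. re-query)
        self._store[key] = (now, value)
        return value


# Two identical "describe" requests inside the window hit the backend only once.
calls = []

def fake_describe():
    calls.append(1)
    return ["COL_A", "COL_B"]

cache = TTLCache(ttl_seconds=15.0)
first = cache.get_or_compute("select * from t", fake_describe)
second = cache.get_or_compute("select * from t", fake_describe)
```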

Bug Fixes

  • Fixed a bug in DataFrameReader.dbapi (PrPr) where a create_connection defined as a local function was incompatible with multiprocessing.
  • Fixed a bug in DataFrameReader.dbapi (PrPr) where the Databricks TIMESTAMP type was converted to the Snowflake TIMESTAMP_NTZ type instead of TIMESTAMP_LTZ.
  • Fixed a bug in DataFrameReader.json where repeated reads with the same reader object would create incorrectly quoted columns.
  • Fixed a bug in DataFrame.to_pandas() that would drop column names when converting a dataframe that did not originate from a select statement.
  • Fixed a bug where DataFrame.create_or_replace_dynamic_table raised an error when the dataframe contained a UDTF, because SELECT * in the UDTF was not parsed correctly.
  • Fixed a bug where casted columns could not be used in the values clause of in functions.

Improvements

  • Added support for more options when reading XML files with a row tag using rowTag option:
    • Added support for removing namespace prefixes from column names using ignoreNamespace option.
    • Added support for specifying the prefix for the attribute column in the result table using attributePrefix option.
    • Added support for excluding attributes from the XML element using excludeAttributes option.
    • Added support for specifying the column name for the value when there are attributes in an element that has no child elements using valueTag option.
    • Added support for specifying the value to treat as a null value using nullValue option.
    • Added support for specifying the character encoding of the XML file using charset option.
    • Added support for ignoring surrounding whitespace in the XML element using ignoreSurroundingWhitespace option.
  • Added support for parameter return_dataframe in Session.call, which can be used to set the return type of the functions to a DataFrame object.
  • Added a new argument to DataFrame.describe called strings_include_math_stats that triggers stddev and mean to be calculated for String columns.
  • Added debuggability improvements that show a trace of the most recent dataframe transformations when an operation leads to a SnowparkSQLException. Enable it using snowflake.snowpark.context.configure_development_features(). This feature also requires AST collection to be enabled in the session, which can be done with session.ast_enabled = True.
  • Improved the error message for Session.write_pandas() and Session.create_dataframe() when the input pandas DataFrame does not have a column.
  • Added support for retrieving Edge.properties when retrieving lineage from DGQL in DataFrame.lineage.trace.
  • Added a parameter table_exists to DataFrameWriter.save_as_table that allows specifying if a table already exists. This allows skipping a table lookup that can be expensive.
  • Improved DataFrame.select when the arguments contain a table function whose output columns collide with columns of the current dataframe. With this improvement, if the user passes non-colliding columns as string arguments, as in df.select("col1", "col2", table_func(...)), the query generated by the Snowpark client will not raise an ambiguous-column error.
  • Improved DataFrameReader.dbapi (PrPr) to use in-memory Parquet-based ingestion for better performance and security.
  • Improved DataFrameReader.dbapi (PrPr) to use MATCH_BY_COLUMN_NAME=CASE_SENSITIVE in copy into table operation.
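The debuggability note above can be sketched as a small helper. This assumes snowflake-snowpark-python >= 1.33.0 is installed and session is an active Snowpark Session; the import path, configure_development_features(), and ast_enabled are taken directly from the note, while the helper itself is an illustrative wrapper and is not invoked here.

```python
def enable_dataframe_tracing(session):
    """Hedged sketch: turn on the 1.33.0 debuggability trace for a session."""
    # Import deferred so the sketch stays importable without Snowpark installed.
    from snowflake.snowpark.context import configure_development_features

    session.ast_enabled = True        # AST collection is required for the trace
    configure_development_features()  # show recent transformations on SnowparkSQLException
```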

Snowpark Local Testing Updates

New Features

... (truncated)

Commits
  • 40957d7 Update CHANGELOG.md release date 6/19/2025
  • dde46f5 Update CHANGELOG.md date
  • 7608971 Update changelog v1.33.0 (#3471)
  • 762dea8 NO-SNOW: Fix dependency setup issues in modin daily tests (#3467)
  • 8dc8902 SNOW-2149024: Fix KeyError bug in test_str___getitem___dict and test_str_slic...
  • 16f5bd9 fix test flakes
  • 4d90f72 snowml fix
  • 8bca0ad Disabling multiline queries by default (#3436)
  • d4d907f NO SNOW: release preparation for 1.33.0
  • d28e3f1 SNOW-2147180: Fix test_switch_operations.py::test_filtered_data on jenkins an...
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [snowflake-snowpark-python](https://github.com/snowflakedb/snowpark-python) from 1.26.0 to 1.33.0.
- [Release notes](https://github.com/snowflakedb/snowpark-python/releases)
- [Changelog](https://github.com/snowflakedb/snowpark-python/blob/main/CHANGELOG.md)
- [Commits](snowflakedb/snowpark-python@v1.26.0...v1.33.0)

---
updated-dependencies:
- dependency-name: snowflake-snowpark-python
  dependency-version: 1.33.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels on Jun 23, 2025.