I only care about one row in the result, so I set ignoreExcessRows: true in the results section, but the test still fails with an exception stating that the number of rows does not match:
-- database: presto; groups: catalog,session_variables;
--!
set session distributed_join = false;
show session
--!
-- delimiter: |; trimValues: true; ignoreExcessRows: true;
distributed_join | false | true | boolean | Use a distributed join instead of a broadcast join |
This version, which only adds ignoreOrder: true, works correctly:
-- database: presto; groups: catalog,session_variables;
--!
set session distributed_join = false;
show session
--!
-- delimiter: |; trimValues: true; ignoreExcessRows: true; ignoreOrder: true
distributed_join | false | true | boolean | Use a distributed join instead of a broadcast join |
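For comparison, here is roughly the assertion I would expect ignoreExcessRows to produce. This is only a sketch of the intent, not the runner's actual code; assertThat, query, and row are the usual tempto helpers, and I'm assuming QueryAssert exposes a containment-style check:

import static com.teradata.tempto.assertions.QueryAssert.Row.row;
import static com.teradata.tempto.assertions.QueryAssert.assertThat;
import static com.teradata.tempto.query.QueryExecutor.query;

// ignoreExcessRows should mean "the result contains this row",
// not "the result is exactly this one row":
assertThat(query("SHOW SESSION"))
        .contains(row("distributed_join", "false", "true", "boolean",
                "Use a distributed join instead of a broadcast join"));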
Here's the stack trace for the failure:
INFO: [1 of 2] sql_tests.testcases.catalog.btr_0 (Groups: btr)
Mar 28, 2016 9:15:06 PM com.teradata.tempto.internal.listeners.ProgressLoggingListener onTestFailure
SEVERE: Exception:
java.lang.AssertionError: Expected row count to be <1>, but was <32>; rows=[[columnar_processing, false, false, boolean, Use columnar processing], [columnar_processing_dictionary, false, false, boolean, Use columnar processing with optimizations for dictionaries], [dictionary_aggregation, false, false, boolean, Enable optimization for aggregations on dictionaries], [distributed_index_join, false, false, boolean, Distribute index joins on join keys instead of executing inline], [distributed_join, true, true, boolean, Use a distributed join instead of a broadcast join], [execution_policy, all-at-once, all-at-once, varchar, Policy used for scheduling query tasks], [hash_partition_count, 8, 8, bigint, Number of partitions for distributed joins and aggregations], [initial_splits_per_node, 4, 4, bigint, The number of splits each node will run per task, initially], [optimize_hash_generation, true, true, boolean, Compute hash codes for distribution, joins, and aggregations early in query plan], [parse_decimal_literals_as_double, false, false, boolean, Parse decimal literals as DOUBLE instead of DECIMAL], [plan_with_table_node_partitioning, true, true, boolean, Experimental: Adapt plan to pre-partitioned tables], [prefer_streaming_operators, false, false, boolean, Prefer source table layouts that produce streaming operators], [push_table_write_through_union, true, true, boolean, Parallelize writes when using UNION ALL in queries that write data], [query_max_run_time, 100.00d, 100.00d, varchar, Maximum run time of a query], [re2j_dfa_retries, 5, 5, bigint, Set a number of DFA retries before switching to NFA], [re2j_dfa_states_limit, 2147483647, 2147483647, bigint, Set a DFA states limit], [redistribute_writes, true, true, boolean, Force parallel distributed writes], [regex_library, JONI, JONI, varchar, Select the regex library], [resource_overcommit, false, false, boolean, Use resources which are not guaranteed to be available to the query], [split_concurrency_adjustment_interval, 100.00ms, 100.00ms, varchar, Experimental: Interval between changes to the number of concurrent splits per node], [task_aggregation_concurrency, 1, 1, bigint, Experimental: Default number of local parallel aggregation jobs per worker], [task_hash_build_concurrency, 1, 1, bigint, Experimental: Default number of local parallel hash build jobs per worker], [task_intermediate_aggregation, false, false, boolean, Experimental: add intermediate aggregation jobs per worker], [task_join_concurrency, 1, 1, bigint, Experimental: Default number of local parallel join jobs per worker], [task_share_index_loading, false, false, boolean, Share index join lookups and caching within a task], [task_writer_count, 1, 1, bigint, Default number of local parallel table writer jobs per worker], [hive.force_local_scheduling, false, false, boolean, Only schedule splits on workers colocated with data node], [hive.orc_max_buffer_size, 8MB, 8MB, varchar, ORC: Maximum size of a single read], [hive.orc_max_merge_distance, 1MB, 1MB, varchar, ORC: Maximum size of gap between two reads to merge into a single read], [hive.orc_stream_buffer_size, 8MB, 8MB, varchar, ORC: Size of buffer for streaming reads], [hive.parquet_optimized_reader_enabled, false, false, boolean, Experimental: Parquet: Enable optimized reader], [hive.parquet_predicate_pushdown_enabled, false, false, boolean, Experimental: Parquet: Enable predicate pushdown for Parquet]]
at org.assertj.core.api.AbstractAssert.failWithMessage(AbstractAssert.java:114)
at com.teradata.tempto.assertions.QueryAssert.hasRowsCount(QueryAssert.java:113)
at com.teradata.tempto.assertions.QueryAssert.containsExactly(QueryAssert.java:225)
at com.teradata.tempto.assertions.QueryAssert.matches(QueryAssert.java:100)
at com.teradata.tempto.internal.convention.sql.SqlQueryConventionBasedTest.test(SqlQueryConventionBasedTest.java:96)
at com.teradata.tempto.internal.convention.ConventionBasedTestProxyGenerator$ConventionBasedTestProxy.test(ConventionBasedTestProxyGenerator.java:120)
at com.teradata.tempto.catalog.btr_0(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:85)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:639)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:821)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1131)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:124)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:108)
at org.testng.TestRunner.privateRun(TestRunner.java:773)
at org.testng.TestRunner.run(TestRunner.java:623)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:357)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:352)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:310)
at org.testng.SuiteRunner.run(SuiteRunner.java:259)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1185)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1110)
at org.testng.TestNG.run(TestNG.java:1018)
at com.teradata.tempto.runner.TemptoRunner.run(TemptoRunner.java:88)
at com.teradata.tempto.runner.TemptoRunner.runTempto(TemptoRunner.java:65)
at com.teradata.tempto.runner.TemptoRunner.runTempto(TemptoRunner.java:53)
at com.facebook.presto.tests.TemptoProductTestRunner.main(TemptoProductTestRunner.java:33)
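Reading the trace, the failure path is matches -> containsExactly -> hasRowsCount, so when ignoreOrder is not set the runner appears to route everything through containsExactly, which asserts the row count before ignoreExcessRows can apply. A minimal sketch of the dispatch I'm guessing at (method names taken from the trace; the branch itself is my assumption, not the actual tempto source):

// Guessed shape of SqlQueryConventionBasedTest.test() (line 96 in the trace):
QueryAssert assertion = assertThat(queryResult);
if (ignoreOrder) {
    // containment check: excess rows in the result are tolerated
    assertion.contains(expectedRows);
} else {
    // exact match: hasRowsCount() fires first, so ignoreExcessRows is never consulted
    assertion.containsExactly(expectedRows);
}

If that reading is right, ignoreExcessRows is effectively a no-op unless ignoreOrder is also set, which is why the second file passes.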