PKG-870 Packaging tasks for release - PS 8.0.43-34 #5692
Merged
+98,494
−65,018
Conversation
Approved by: Erlend Dahl <erlend.dahl@oracle.com>
SYMPTOM:
- The engine allocates more than the configured amount of memory while creating a secondary index on a VARCHAR column. The memory allocated increases with higher values of the innodb_ddl_buffer_size variable.

BACKGROUND:
- While creating the index, parallel threads scan the rows in the table and store the fields of the index records in Key_sort_buffer->m_heap. The address of each index record is tracked in the metadata tuple container Key_sort_buffer->m_dtuples, which uses standard memory allocation.

ROOT CAUSE:
- We approximate the number of tuples that can fit in the buffer as:
  buffer_size_available_for_thread = innodb_ddl_buffer_size / (number of parallel scan threads)
  number of tuples that fit = buffer_size_available_for_thread / std::max(1, minimum_record_size)
- The minimum size of variable-length column types, VARCHAR among them, is determined as zero in the method dtype_get_min_size_low(). Since both the primary and secondary index keys are defined on VARCHAR columns, the minimum record size obtained for the secondary index is 0.
- This causes the formula above to estimate a very high number of tuples.
- Since we reserve memory in the tuple container up front for the estimated number of tuples, over-estimating the row count caused the memory allocation spike in the container: for example, ~1 GB for a 128 MB innodb_ddl_buffer_size.

FIX:
- Instead of reserving the memory up front for the metadata tuple vector, let the container manage its own capacity. This avoids the over-estimation of tuples in situations like the one reported in this bug. We now account for the approximate capacity of the tuple container at the time the record metadata is inserted. This may reduce the number of records packed into the key_sort_buffer, which in turn may produce smaller sorted chunks in the temporary merge file and slightly increase the IO overhead.
- Increase the IO thresholds in the test innodb.alter_table_temp_files_io.test by approximately 15% to account for the minor increase in IO observed.

Change-Id: I97d19bed15b9accdb9269dfef28ce662c4bd6a6f
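The over-estimation described above can be sketched numerically. This is an illustrative stand-in for the InnoDB logic, not the actual server code; the function name and the clamp-to-1 detail are assumptions based on the description:

```python
def estimated_tuples(ddl_buffer_size, scan_threads, min_record_size):
    """Pre-fix estimate: how many tuples 'fit' in one thread's buffer share."""
    per_thread = ddl_buffer_size // scan_threads
    # VARCHAR columns report a minimum record size of 0, which gets clamped
    # to 1 byte, so every byte of the share counts as one potential tuple.
    return per_thread // max(1, min_record_size)

# 128 MB buffer, 4 scan threads, secondary key on VARCHAR (min size 0):
# ~33.5 million tuple slots are reserved up front, far more than needed.
print(estimated_tuples(128 * 1024 * 1024, 4, 0))
```

With a realistic minimum record size (say 32 bytes) the estimate drops by that factor, which is why the fix sizes the container as records actually arrive instead of reserving for the worst case.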
Issue
======
Bug#37105430 was fixed in trunk; this patch revealed some failures in trunk. Some of these issues are still hidden in 8.0 and 8.4.

Fix
======
Backport the fix for Bug#37105430 into 8.0 and 8.4.

Change-Id: I6b91a7bd0c614761c7fad2e43d10b258327f0033 Approved by: Pavithra Pandith <pavithra.pandith@oracle.com>
The command_services suite uses the sql server sessions as an underlying mechanism. This was never supposed to work recursively: i.e. a command using sessions executed in a session should fail. Added a check to the command service to ensure it's not usable in such a context. Fixed some error handling in the test UDF. Change-Id: Idb8e4793a3363a1aed6c6b8789445dec8ac462fd
Symptom: The `page_track.read_only` MTR test case has been failing on Windows with `mysqltest: At line 63: Command "force-rmdir" failed with error 1. my_errno=41.`

Root cause: The test removes a directory containing files still in use by the server, which leads to an error deleting them on Windows. The test needs to delete the directory before the server is started again, and for simplicity did so before initiating the restart. This does not work on Windows.

Fix: We now stop the server first, then delete the files, and only then start the server again. Additionally, `restart_innodb_read_only.inc` was rewritten using existing scripts to reduce code duplication.

Change-Id: If249119b77e3dffc1ffa10faab285d8bd2c057a9
…[8.0 version] We hit an empty pointer in Query_block::nest_derived, used by Query_block::transform_scalar_subqueries_to_join_with_derived.

[ 8.0 version: CUBE not supported. 8.4 version: we implicitly try to do a prepare for the secondary engine in the original repro due to GROUP BY CUBE(x), since CUBE is not supported by the primary engine. In the simplified repro, we explicitly enable the transform to see the error. 9.3 version: the repro gives an error message due to an outer reference in the window's ORDER BY clause; if that is removed, the error doesn't occur. ]

At one stage we try to transform a query with a subquery shown as follows by the optimizer trace:

/* select#2 */ select `t1`.`i` from <constant table> join `cte2` on (/* select#5 */ select 3 from `cte2`) <> 0

i.e. we have a join condition, but only one table (cte2) in the from list. This is the result of an earlier transformation. However, Query_block::nest_derived is not prepared for this scenario, presupposing that there are at least two tables when a join condition is involved: accessing Table_ref::embedding only works if we have an embedding, i.e. an inner table, but here we have only a single table.

Solution: test for this situation, which allows the transform to succeed.

Note: in 8.4 the subquery can't be offloaded to the secondary engine anyway, since there are other subqueries that are not eliminated, but the fix will work for the primary engine for the simplified repro (no CUBE) with the transform enabled.

Change-Id: If83e7a8e72372a1d3da30b9155ede594352a2845
…uery transform [ back-port ] [ Back-ported to enable backport of the fix for Bug#36314993/Bug#36464947 ]

Another case of transforming subqueries to a JOIN with derived tables, with the containing query being grouped. We create an extra derived table in which to do the grouping. This process moves initial select list items from the containing query into the extra derived table, replacing all original select list items (with the exception of subqueries, which get their own derived tables) with columns from the extra derived table.

This logic didn't handle the ANY_VALUE function correctly: as for other functions, it was left in the outer query block, where it doesn't help ward off the grouping legality checks, hence the unexpected error seen.

The patch adds support for ANY_VALUE(expression) in queries undergoing the mentioned transform. We inject an ANY_VALUE wrapper around all columns moved from the original query to the select list of the extra derived table iff all references to the column are made within the arguments of an ANY_VALUE call. Example:

SELECT ANY_VALUE(i) AS i1, (SELECT i FROM t) AS subquery, SUM(i) AS summ FROM t;

->

SELECT ANY_VALUE(`derived_1_3`.`Name_exp_1`) AS `i1`, `derived_1_4`.`i` AS `subquery`, `derived_1_3`.`summ` AS `summ` FROM ( SELECT SUM(`test`.`t`.`i`) AS `summ`, ANY_VALUE(`test`.`t`.`i`) AS `Name_exp_1` # add ANY_VALUE FROM `test`.`t`) `derived_1_3` LEFT JOIN ( SELECT `test`.`t`.`i` AS `i` FROM `test`.`t`) `derived_1_4` ON(TRUE) WHERE TRUE

Change-Id: I41a08074981ea1334b7b5c0cb955488711bd7d6d
…calar_subquery [ back-port ] Bug#36464947 Server crash with subquery_to_derived=ON

[ This is a back-port of the fix for Bug#36314993, which fixes Bug#36464947 seen on mysql-8.0. Bug#36314993 was not seen on mysql-8.0 itself, due to syntax that was only added later. The explanation below is taken from Bug#36314993 and is not accurate for the mysql-8.0 repro, though the underlying cause probably is the same. ]

I refer to the simplified repro with ROLLUP, not CUBE, in the below. There are two issues.

1. In the rollup query we have

SELECT ( SELECT f2 FROM (SELECT 1 GROUP BY ROLLUP(1)) AS dt1 GROUP BY f1) AS a, MIN(f1) FROM t1;

In preparation for transforming the scalar subquery, we move the aggregate MIN(f1) into a derived table, as follows:

SELECT (SELECT derived_1_4.Name_exp_2 FROM (SELECT 1 GROUP BY '' WITH ROLLUP) AS dt1 GROUP BY derived_1_4.Name_exp_1) AS a, derived_1_4.`MIN(f1)` FROM (SELECT MIN(t1.f1) AS `MIN(f1)`, t1.f1 AS Name_exp_1, t1.f2 AS Name_exp_2 FROM t1) AS derived_1_4

As can be seen, the original occurrences of f1 and f2 in the scalar subquery have been replaced with another (outer) reference: derived_1_4.Name_exp_1 for f1 and derived_1_4.Name_exp_2 for f2. f1 is induced as a hidden field in the select list along with visible f2, i.e. it is scalar. We lost the hidden flag for f1 alias derived_1_4.Name_exp_1, which led to the first assert: it was no longer a scalar subquery as expected. We now make sure to inherit the hidden flag in Item_field::replace_item_field. To facilitate this we had to make a copy of f1 in Query_block::transform_grouped_to_derived lest we clobber its hidden status when calling add_item_to_list a few lines further down.

2. When doing the replacement of t1's fields after moving it into the derived table, we correctly detected that (hidden) f1 in the select list and f1 in the group by list are outer references, cf. the logic in Item_field::replace_item_field. However, we created two different items for those two occurrences, which caused another assert when creating the tmp table; we now make sure we use only one field, cf. the new member in struct Item_field_replacement: m_outer_field.

Now, in this case, the whole exercise of moving the grouping into a derived table turns out to be useless, as we can't transform the scalar subquery anyway (correlated but no equality clause, rollup). But we don't know that this early in the transform, so it is done and should work. The other repros no longer crash, and are included in the tests.

Change-Id: Iac3e35bdbf2f0385e1e4e33083f4e2d19060036d
Description:
------------
UBSan reports a "load of value 16, which is not a valid value for type 'bool'" error when running the |Rpl_commit_order_queue_test.Simulate_mts| test.

Analysis:
---------
The undefined behaviour identified by UBSan is caused by using the test()/test_and_set() methods on a default-constructed std::atomic_flag object, as in this case it has an indeterminate initial value (until C++20).

Fix:
----
Although it is possible to fix this issue and continue using std::atomic_flag in the problematic unit test, there is no strong reason to prefer it over std::atomic<bool> in that place. Switching to std::atomic<bool> simplifies the test code and improves readability. As a result, all instances of std::atomic_flag have been replaced with std::atomic<bool>, ensuring that the code no longer accesses uninitialized variables. Additionally, missing code formatting has been applied in a couple of places within the same test file.

Change-Id: I4ed03bb6633d61afb7a842f9ea37e7578c42e71a
function in 8.0. POST-PUSH FIX: Fixed test case failure in `main.partition_exchange`. Change-Id: Ia31857196037619237d07f7484e452c6b2341fc7
Approved-by: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com>
Approved-by: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com>
INFORMATION_SCHEMA Post-push fix for test failures: 1. Re-recorded tests partition_upgrade_5727_mac_lctn_2 and partition_upgrade_5727_win_lctn_2. 2. I_S.INNODB_COLUMNS and I_S.INNODB_TABLES output varies with the type of build, hence made the test more generic. Change-Id: I54697bfd602c19212fe78b0af099c79223d66810
…x digits [8.0 patch]

Problem: In semi-sync replication, when the binary log suffix exceeds six digits (.999999) and becomes binlog-name.1000000, the replication moves from semi-sync to async.

Analysis: The binlog file naming convention in MySQL uses the prefix 'binlog-name.' followed by a suffix number. When the suffix number is within six digits, it is zero-padded to maintain the length (e.g., binlog-name.000001). However, once the suffix exceeds six digits, the number is not trimmed or padded, resulting in names such as binlog-name.1000000 (1 followed by six zeros). During replication, the ActiveTranx::compare function, which is implemented using strcmp, compares the binlog positions. Since strcmp only returns the relationship of the first non-matching character, it may incorrectly determine that mysql-bin.1000000 comes before mysql-bin.999999. This incorrect comparison can lead to the source no longer sending "confirmation-needed" data packets to the replica, or no longer waiting for confirmations from the replica. As the source stops waiting for the replica's acknowledgments, semi-synchronous replication effectively degrades to asynchronous replication.

Fix: In the ActiveTranx::compare function, when comparing the binlog sequence numbers, the lengths of the binlog filenames are compared first; if the lengths are identical, the comparison falls back to strcmp.

Change-Id: Ic6283781706fbe06ad95fe90e6e2988efdbb88c4
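The length-first ordering rule from the fix can be illustrated as follows. This is a hedged Python sketch of the comparison described for ActiveTranx::compare, not the actual C++ implementation:

```python
def binlog_cmp(a: str, b: str) -> int:
    """Order binlog file names: a shorter name (fewer suffix digits) sorts
    first; only names of equal length are compared lexicographically."""
    if len(a) != len(b):
        return -1 if len(a) < len(b) else 1
    return (a > b) - (a < b)   # strcmp-style result: -1, 0, or 1

# Plain lexicographic (strcmp-style) ordering gets the rollover wrong:
assert "mysql-bin.1000000" < "mysql-bin.999999"   # '1' < '9' at first mismatch

# Length-first comparison restores the intended order:
assert binlog_cmp("mysql-bin.999999", "mysql-bin.1000000") < 0
```

Comparing lengths first is safe here because zero-padding keeps all six-digit suffixes the same length, so the fallback to strcmp is only reached for names in the same "generation".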
If the system variable MAX_JOIN_SIZE is set, the optimizer chooses another code path than normal in Query_expression::optimize: it calls EstimateRowAccesses, compares the result with the maximum allowed in MAX_JOIN_SIZE, and gives an error if it is too large. Now, in this query, EstimateRowAccesses arrives in EstimateRowAccessesInItem and the item in question is an Item_singlerow_subselect. Its query_expression doesn't have a value set for its root_access_path, so we next call EstimateRowAccesses with a path argument taken from Item_singlerow_subselect::root_access_path(), which is always nullptr. This eventually leads to the issue when trying to access the empty path.

Solution: skip calling EstimateRowAccesses if the path is nullptr.

For the hypergraph optimizer, we don't get into this problem because Query_expression::m_root_access_path is always set, cf. this code:

if (thd->lex->using_hypergraph_optimizer()) {
  // The hypergraph optimizer also wants all subselect items to be optimized,
  // so that it has cost information to attach to filter nodes.
  for (Query_expression *unit ...) {
    unit->optimize(thd, ...)  <-- leads to creation of access path for QE
    return true;
  }
  :
}

so in EstimateRowAccessesInItem:

if (qe->root_access_path() != nullptr) {
  path = qe->root_access_path();  <-------
} else {
  path = qe->item->root_access_path();
}

we get a non-null path from QE's m_root_access_path. This is not always so for the old optimizer.

Change-Id: I7966d347900014925021fd74f869dee52df8f7ab
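A minimal sketch of the guard described above, using hypothetical dict-based stand-ins for the query expression and item (the real logic lives in the C++ function EstimateRowAccessesInItem):

```python
def estimate_rows(path):
    """Stand-in for walking an access path and summing row estimates."""
    return path["rows"]

def estimate_row_accesses_in_item(qe, item):
    # Prefer the query expression's root access path; fall back to the item's.
    path = qe.get("root_access_path") or item.get("root_access_path")
    if path is None:
        # Old-optimizer case: Item_singlerow_subselect's path can be null
        # here, so skip the estimate instead of dereferencing a null path.
        return 0
    return estimate_rows(path)

# Hypergraph-style case: the QE has a path, so it is used.
assert estimate_row_accesses_in_item({"root_access_path": {"rows": 42}}, {}) == 42
# Old-optimizer case: both paths missing -- previously a crash, now skipped.
assert estimate_row_accesses_in_item({}, {}) == 0
```

Returning 0 here mirrors "skip the estimate": the MAX_JOIN_SIZE check simply does not count a subquery whose access path has not been built yet.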
…unfixable When SQL functions are improved, they may throw SQL errors in situations where they previously did not. If this happens in a table's constraints, default expressions, partitioning, or virtual columns, the table cannot be opened. This prevents both analysing the problem (e.g. with SHOW CREATE TABLE) and addressing it (e.g. with ALTER TABLE ... DROP ...). * Post-push fix: fix interplay with some other (lctn) test cases that also rely on hand-crafted databases. Change-Id: I69b800a5b0e007894e82cf3065e582d4438d261a
…it fails on pb2 daily trunk Problem: The test rpl_semi_sync_binlog_suffix_exceed_six_digit.test, which was added as part of BUG#37024069, fails when run with gtid_mode=ON. Analysis: The statement CHANGE REPLICATION SOURCE with SOURCE_LOG_FILE gives an error when running with gtid_mode=ON, as SOURCE_LOG_FILE cannot be set when SOURCE_AUTO_POSITION is active. Fix: Running this test with gtid_mode=ON adds no extra coverage, so move the test from the rpl suite to rpl_nogtid. Change-Id: I86da78a1d5406b0e497abe8fbb0b2f596f31c85f
In the deb packaging we run a few install commands that change ownership of files to the root user. This means the build has to run as root, or with fakeroot. This isn't recommended or required (downstream packaging installs these files without changing ownership), and in Ubuntu 25.04 the default config does not allow builds to use root, so it fails. Removed the ownership change from these calls to let the build run rootless. Change-Id: I318693eb11ffc193c2d8b43c746685786f8366e7
In Fedora 42+ /usr/sbin is merely a symlink to /usr/bin, try to detect this and change paths as needed. Change-Id: Ideba537092e21a6c809967302abc56d30ee25c87
Follow up fix for compilation problem: storage/ndb/src/kernel/vm/Configuration.cpp:656:2: error: extra ';' outside of a function is incompatible with C++98 [-Werror,-Wc++98-compat-extra-semi] Change-Id: I8bdbf9d3f68e1805f76bf68f455d85fd570ce45a
Add a few missing includes in router, abseil and X plugin Change-Id: Ia974396b2fc02a6fa981025c1c16f46e12493668
…any columns" This reverts commit 8504c085229ffd9d80df826798220053d4180f17. This commit is the fix for Bug#37510755 and is being reverted. Change-Id: I4e874f491f856afd900bd6da6479e5e15fd9a03d
On Fedora 38: make build_id_test Linking CXX executable ../runtime_output_directory/build_id_test Verifying build-id egrep: warning: egrep is obsolescent; using grep -E Fix: use cmake regexp instead. Change-Id: I05b9167279b6cfcdf770d87bb5ba98792a158691 (cherry picked from commit a5451a65b642c2c940c19c1d35cd32cb6dc1d14f)
Set C locale when invoking /usr/bin/eu-readelf Change-Id: Ib628a6c5276a1903caff75d403616e7ff4dd204a (cherry picked from commit d9ad0253b147981b276d523a1395a6e7ea92d969)
generated_message_tctable_lite.cc:85:73: error: cannot tail-call: address of caller arguments taken 85 | PROTOBUF_MUSTTAIL return GenericFallbackImpl<MessageLite, std::string>( | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ 86 | PROTOBUF_TC_PARAM_PASS); There is a bug reported for this: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=119376 This patch simply disables clang::musttail for non-clang compilers. The problematic code is apparently fixed upstream, with a patch called: "Remove spill hack from SingularBigVarint." protocolbuffers/protobuf@bc56c34 Change-Id: Ida1a31f9027665ad4bd375b30843442bd9923b79
autocommit=OFF The query rewrite plugin's reload-rules logic runs in an independent thread created when a request is submitted. This thread runs the reload logic and exits. When the server is in global autocommit OFF mode, the new thread and its session also run with autocommit OFF. At the end of the reload execution, the code does a statement commit and exits. A statement commit does not preserve the changes in autocommit OFF mode. Due to this, after the thread exits, any table data changes made are lost. Fix: Perform a transaction commit to ensure all changes are preserved before exiting the session/thread. Change-Id: Ib6f96873f90d99b2c53bb0157e91b160760b933e
… init file

Problem
=======
An init file having a single line with multiple SQL statements, like the one below:

SET character_set_client = 'cp850'; SET sql_log_bin = OFF; ALTER USER 'root'@'localhost' IDENTIFIED BY 'pass';

throws some errors during init. On Unix systems, we see: "1064 : You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use..." On Windows, the behavior is different: it throws an assertion that the query size is greater than the maximum query size.

Analysis
========
The problem occurs when one single line contains multiple SQL statements.

Fix
===
When dealing with a line having multiple SQL statements, the statements are fetched one by one from the current line. Appending a null character to each statement fetched from the line fixes the issue.

Change-Id: I06f56502dda88a1f41b8d40176201c517c32b985
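The fix can be sketched as follows. This is illustrative only: the server code works on C buffers, the function name is invented, and the split on ';' is deliberately naive (the real parser handles quoting and comments):

```python
def split_statements(line: str):
    """Split one init-file line into statements, NUL-terminating each one
    the way the server's statement buffer expects."""
    parts = [s.strip() for s in line.split(";") if s.strip()]
    # Append the terminator to every statement, not just the last one on
    # the line -- the missing terminator is what caused the 1064 error on
    # Unix and the query-size assertion on Windows.
    return [s + ";\0" for s in parts]

line = "SET sql_log_bin = OFF; ALTER USER 'root'@'localhost' IDENTIFIED BY 'pass';"
stmts = split_statements(line)
assert len(stmts) == 2
assert all(s.endswith("\0") for s in stmts)
```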
is killed via SQL statement

Description:
------------
When the applier thread is terminated using the SQL KILL statement, there is no specific error logging indicating that the termination was due to this command. Instead, a generic error message is logged: 'Fatal error during execution on the Applier process...'

Analysis:
---------
The current error message does not sufficiently differentiate between an applier application error and an intentional termination via KILL, leading to potential confusion. To improve clarity, the error log should explicitly indicate when the applier thread is killed using the SQL KILL statement.

Fix:
----
Enhance logging to include a specific message when the applier thread is terminated via the SQL KILL statement.

Change-Id: Ia059be50a44f122ea6792faa8633ec3ff420218f
…ascade update [post-push fix] The function used by virtual field calculation to return the new field value for a base (non-virtual) column could instead return a value for a virtual column. Fixed so that only non-virtual field values may be returned. Change-Id: I9d6f16012f04a6ea33b265896bf7204cc76fa9ba
Problem: -------- When importing a table with a secondary index after an instant column drop there was a failure due to using incorrect fields mapping both in index rec_cache and in array of fields. Fix: ---- 1. Array of fields used by secondary index is rebuilt during table import with column drop to make sure it reflects actual list of fields after an instant drop. 2. rec_cache is set to empty after instant column drop because it is used only for intrinsic tables. Change-Id: I66d2c02f90f95205cc7f28b5f036656a256f15ad
…RROR 241, MSG: INVALID SCHEMA OBJECT VERSION This is a forward-port of: BUG 31848270 - EXCLUDING BLOB COL FROM EVENT DEFINITION RESULTS IN ERROR 241, MSG: INVALID SCHEMA OBJECT VERSION 3-23954393131 createEventOperation fails with 241: Invalid schema object version. Workaround fix: skip Blob event validation in non-MySQLD NdbApi applications. Change-Id: I900e24f765747d4a032cffb736e1480926dc50f8
When the router fails to resolve the destination addresses of the cluster nodes due to a temporary DNS outage, it closes the acceptor sockets but sporadically fails to open the acceptor socket again.

Background
==========
If DNS comes back within 10 seconds:
1. the metadata-cache opens the acceptor socket again
2. the routing-connect waits for the PRIMARY failover for 10 seconds and closes the acceptor again (without anyone ever opening it again).

Resolver errors should not lead to a "wait-for-PRIMARY-failover", as the wait should only be done if the connection to the primary failed. In case of DNS failures the PRIMARY may be up and running without any intention to ever switch to another node.

Change
======
- handle resolver errors correctly so that they do not lead to a wait-for-primary-failover

Change-Id: I11ff612063fa53333ad29c7ccdbaea11b9123dc1
Approved by: Erlend Dahl <erlend.dahl@oracle.com> Change-Id: Idbd2d69c47175d12dc1a3610d846c8f20773a0ce
Approved by: Erlend Dahl <erlend.dahl@oracle.com> Change-Id: If31440c12fee6a563e16fcdd2455cc280b9b4f5d
m_ptr + data_size < m_bounds.second

PROBLEM:
1. After the fix for "Bug #37233273 - DDL buffer increase has out of control memory usage", this assertion was seen.
2. The fix for Bug #37233273 basically accounts for metadata as part of the innodb_ddl_buffer_size buffer, which reduces the number of rows that can fit in a key buffer block while adding an index.
3. When the key buffer is full, the records in the key buffer are added to the temporary file, which constitutes a chunk. This chunk is smaller than the actual key buffer size.
4. The offset of each chunk is recorded in a queue. The chunk sizes are multiples of the sector size (4KB).
5. When merging two chunks into one sorted chunk, we read from this queue.
6. Another bug, Bug#36444172 "Regression in 8.0.27+ related to Parallel DDL IO Amplification", read the sizes of the first two chunks from the queue and set a limit on the input buffer based on the size of the first two chunks.
7. In our case, the sizes of the first two chunks were smaller than the subsequent chunk sizes.
8. Therefore, while reading the rows from subsequent chunks we hit this limit on the input buffer.

FIX:
1. Each DDL thread is allocated a certain amount of input buffer based on the number of scan threads and the DDL buffer size. We should use this allocation as the limit for the input buffer while reading the rows from a chunk.

Change-Id: Iba4a224c9973bea2b5a6a2288a23473dd2c21b06
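The difference between the old and corrected input-buffer limits can be sketched numerically. Names and the 64 MB / 4-thread figures below are illustrative assumptions, not values from the actual code:

```python
def input_buffer_limits(ddl_buffer_size, scan_threads, chunk_sizes):
    """Contrast the pre-fix and post-fix limits on the merge input buffer."""
    # Pre-fix (Bug#36444172): limit derived from the first two chunk sizes.
    # If later chunks are larger, reading them trips the bounds assertion.
    old_limit = sum(chunk_sizes[:2])
    # Fix: use the per-thread buffer allocation, independent of chunk sizes.
    new_limit = ddl_buffer_size // scan_threads
    return old_limit, new_limit

# First two chunks are small (8 KiB each); a later chunk is 64 KiB.
old, new = input_buffer_limits(64 * 1024 * 1024, 4, [8192, 8192, 65536])
assert old == 16384            # too small for the 64 KiB chunk that follows
assert new == 16 * 1024 * 1024 # per-thread allocation comfortably covers it
```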
https://perconadev.atlassian.net/browse/PS-10074 The download URL of Percona-Server-shared-56-5.6.51-rel91.0.1.el7.x86_64.rpm, which percona-server-8.0 depends on, is incorrect. Even if the RPM is fetched from the current download URL, it cannot be unpacked correctly, because the file served is an HTML page rather than an RPM.
…ng_key_grammar fix) https://perconadev.atlassian.net/browse/PS-10041 Fixed problem with logical conflict between Oracle's fix for the Bug #37436220 "MySQL Server SEGV within JOIN::refresh_base_slice" (commit 026aac7@mysql/mysql-server) and Percona's implementation of the 'CLUSTERING KEY' extension which caused 'main.tokudb_clustering_key_grammar' MTR test case to fail. Introduced new 'AT_CLUSTERING_KEY_COLUMN_ATTR' value in the 'Attr_type' enumeration. 'PT_unique_combo_clustering_key_column_attr' class split into two: 'PT_unique_key_column_attr' and 'PT_clustering_key_column_attr'.
…ers fix) https://perconadev.atlassian.net/browse/PS-10041 Re-recorded a bunch of MTR test cases because Oracle also introduced the '$do_not_echo_parameters' parameter in the 'start_mysqld.inc' MTR include file in the fix for Bug #37697512 "InnoDB: page_track.read_only big test fails on Windows" (commit 0f118ef@mysql/mysql-server). However, Oracle's implementation (in contrast to Percona's) does not print the extra "<hidden args>" label. For symmetry, similar changes made by Percona in 'kill_and_restart_mysqld.inc' and 'restart_mysqld.inc' were also modified to omit printing the "<hidden args>" label.
https://perconadev.atlassian.net/browse/PS-9647 Force-inlined rec_init_offsets_new(), the same as in the original upstream MySQL version.
…nary table for component_masking_functions (gen_range fix) https://perconadev.atlassian.net/browse/PS-9148 Fixed a problem with the 'gen_range()' UDF not being able to properly handle negative numbers. This function internally calls the 'masking_functions::random_number()' function, which used to always accept 'std::size_t' arguments. When negative 'long long' arguments (extracted from the UDF context) were passed to this function, they were implicitly converted to 'std::size_t', resulting in huge numbers and causing an assertion in the 'std::uniform_int_distribution' constructor, which expects 'min' <= 'max'. Fixed by making 'masking_functions::random_number()' a function template.
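The signed-range problem can be reproduced numerically, and the template-style fix sketched, as follows. This is a Python stand-in for the C++ function template; only the function name comes from the commit, the rest is illustrative:

```python
import random

def random_number(min_val: int, max_val: int) -> int:
    """Post-fix behaviour: honour the caller's signed argument type instead
    of coercing to an unsigned size type."""
    if min_val > max_val:
        # Mirrors the std::uniform_int_distribution precondition min <= max.
        raise ValueError("min must not exceed max")
    return random.randint(min_val, max_val)

# Pre-fix behaviour: -10 implicitly converted to an unsigned 64-bit value.
bad_min = -10 % 2**64        # 18446744073709551606
assert bad_min > 10          # min > max -- trips the distribution's assert

# Post-fix behaviour: the signed range works as intended.
n = random_number(-10, 10)
assert -10 <= n <= 10
```

The `-10 % 2**64` line models two's-complement reinterpretation: a negative `long long` viewed as `std::size_t` becomes a number near 2^64, which is why the distribution's `min <= max` precondition fired.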
https://perconadev.atlassian.net/browse/PS-10041 Cleaned up orphaned MTR artefacts based on the output of the "mtr_checker" utility (https://github.com/Percona-Lab/mtr-checker).

Suite: main
[error]: inconsistent test case "m_i_db_config": [ cnf ]

Suite: group_replication
[error]: inconsistent test case "gr_gcs_psi_threads": [ result ]
[error]: inconsistent test case "gr_ssl_tls13": [ result ]
[error]: inconsistent test case "gr_ssl_mode_verify_identity_error": [ result, opt_master, opt_slave ]
[error]: inconsistent test case "gr_skip_view_change_event": [ cnf ]
[error]: inconsistent test case "gr_psi_event_names_enabled_check": [ result ]
[error]: inconsistent test case "gr_ssl_tls13_runtime_invalid_configuration": [ result ]

Suite: innodb
[error]: inconsistent test case "percona_extended_innodb_status": [ opt_master ]

Suite: ndb
[error]: inconsistent test case "ndb_grant": [ result ]

Suite: ndb_rpl
needed [error]: inconsistent test case "ndb_rpl_multi_binlog_update": [ cnf ]
needed [error]: inconsistent test case "ndb_rpl_3_cluster": [ cnf ]

Suite: rpl
kept for re-enablement [error]: found unknown file "rpl_critical_errors.result.txt"
[error]: inconsistent test case "rpl_relay_log_space_limit_deadlock": [ opt_slave ]
[error]: inconsistent test case "rpl_auto_increment": [ opt_master ]
needed [error]: inconsistent test case "rpl_row_img": [ cnf ]
[error]: inconsistent test case "rpl_invalid_replication_timestamps_multi_source": [ cnf ]
[error]: inconsistent test case "rpl_row_jsondiff_basic": [ cnf ]
needed [error]: inconsistent test case "rpl_row_jsondiff": [ cnf ]

Suite: service_sys_var_registration
needed [error]: inconsistent test case "sys_reg": [ cnf ]

Suite: x
[error]: found "macros" that is not a regular file

Suite: engines/funcs
[error]: inconsistent test case "rpl_init_replica": [ opt_slave ]

Suite: ndb_big
[error]: inconsistent test case "smoke": [ test ]
[error]: inconsistent test case "ndb_verify_redo_log_queue": [ test ]
[error]: inconsistent test case "ndb_multi_tc_takeover": [ test, cnf ]
[error]: inconsistent test case "bug37983": [ test, opt_master ]
[error]: inconsistent test case "my": [ cnf ]
[error]: inconsistent test case "rqg_spj": [ test, opt_master ]
[error]: inconsistent test case "bug13637411": [ test, cnf, opt_master ]

Suite: ndb_ddl
[error]: inconsistent test case "my": [ cnf ]

Suite: ndb_opt
[error]: inconsistent test case "my": [ cnf ]

Suite: ndbcluster
[error]: inconsistent test case "my": [ cnf ]

Suite: ndbcrunch
[error]: inconsistent test case "my": [ cnf ]
[error]: inconsistent test case "cpubind": [ cnf ]

Plugin suite: audit_log_filter
[error]: inconsistent test case "log_format_json": [ result ]
[error]: inconsistent test case "log_format_new": [ result ]
[error]: inconsistent test case "log_format_old": [ result ]

Plugin suite: keyring_vault
[error]: inconsistent test case "table_encrypt_2_keyring": [ result, opt_master ]
[error]: inconsistent test case "table_encrypt_2_directory": [ result ]
…anup) https://perconadev.atlassian.net/browse/PS-10041 Cleaned up orphaned MTR artefacts based on the output of the "mtr_checker" utility (https://github.com/Percona-Lab/mtr-checker).

Suite: component_keyring_kms
[error]: inconsistent test case "migration": [ opt_master ]

Suite: data_masking
[error]: inconsistent test case "error_not_loaded_module": [ opt_master ]

Suite: rocksdb
[error]: inconsistent test case "collation_exceptions_lctn_0": [ opt_master ]
[error]: inconsistent test case "collation_exceptions_lctn_1": [ opt_master ]

Suite: tokudb_rpl
[error]: inconsistent test case "rpl_tokudb": [ opt_master ]
[error]: inconsistent test case "rpl_mixed_row_tokudb": [ result, opt_master ]

Suite: component_keyring_kmip
[error]: inconsistent test case "migration": [ opt_master ]
https://perconadev.atlassian.net/browse/PS-10041 Fixed a problem with an invalid length parameter passed to the 'inet_ntop()' function causing warnings under the compiler's FORTIFY enforcements.
PKG-870 Packaging tasks for release - PS 8.0.43-34
…oad_url PS-10074 (8.0) Fix invalid download URL for compatsrc
EvgeniyPatlan
approved these changes
Aug 29, 2025
percona-ysorokin
approved these changes
Sep 8, 2025
LGTM