I am looking to extract a very large dataset (roughly 1.5 million rows) to Excel files per a request from a business partner. Since this is larger than the Excel row count limit (1,048,576 rows per sheet), my thought was to split the extract in the MySQL code via a LIMIT clause, adding the following at the bottom of a series of scripts that extract the same data:
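The chunking idea above can be sketched as follows. This is a minimal illustration, not the poster's actual scripts: it uses Python's sqlite3 as a stand-in for MySQL (the LIMIT/OFFSET syntax shown is the same), and the table and column names are hypothetical.

```python
# Pull a large result set in Excel-sized slices via LIMIT/OFFSET.
# sqlite3 stands in for MySQL here; table/column names are made up.
import sqlite3

CHUNK = 10  # the real Excel limit is 1,048,576 rows per sheet

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE extract_data (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO extract_data VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(25)])

# ORDER BY matters: without a stable ordering, LIMIT/OFFSET slices
# taken by separate queries can overlap or skip rows.
chunks = []
offset = 0
while True:
    rows = conn.execute(
        "SELECT id, val FROM extract_data ORDER BY id LIMIT ? OFFSET ?",
        (CHUNK, offset)).fetchall()
    if not rows:
        break
    chunks.append(rows)  # in practice: write each slice to its own file
    offset += CHUNK

print([len(c) for c in chunks])  # → [10, 10, 5]
```

One caveat with this approach: each slice is a separate query, so if the underlying data changes between runs, the slices may not line up.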
MySQL Workbench hits the same issue at a different threshold: fewer rows come back than expected, at 9744. LIMIT 0,9745 in Workbench returns 9744 rows, and so does LIMIT 0,9744.
The mysql.exe binary, connecting to the remote host, has the same issue at the same threshold as Toad Data Point: 10234.
Good to know... I just spent about 10 minutes googling to see if this issue was documented... not seeing anything yet... hopefully later versions of MySQL have this fixed.
It's a weird one for sure. I was able to change one of the joins, which worked around it. The LEFT JOIN I was using previously matched multiple rows on the right side, resulting in duplicate rows, so I used SELECT DISTINCT as part of the query. Seems like that plus the high row count may have been a trigger for it. I'll need to test this with MySQL 5.7 or 8.0 at some point to see if a server-side fix was put into place discreetly.
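The duplicate-row situation described above can be sketched like this: a LEFT JOIN whose right-hand table has multiple matches per key fans out the left-hand rows, and SELECT DISTINCT collapses them again. This is an illustrative reconstruction, not the poster's query; sqlite3 stands in for MySQL, and the tables are hypothetical.

```python
# Demonstrate LEFT JOIN fan-out and the SELECT DISTINCT workaround.
# sqlite3 stands in for MySQL; table and column names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (acct_id INTEGER, name TEXT);
    CREATE TABLE notes (acct_id INTEGER, note TEXT);
    INSERT INTO accounts VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO notes VALUES (1, 'call'), (1, 'email');  -- two matches for acct 1
""")

# Plain LEFT JOIN: account 1 appears twice, once per matching note,
# while account 2 (no matches) still appears once.
dup = conn.execute("""
    SELECT a.acct_id, a.name
    FROM accounts a LEFT JOIN notes n ON n.acct_id = a.acct_id
""").fetchall()

# DISTINCT collapses the fan-out back to one row per account.
dedup = conn.execute("""
    SELECT DISTINCT a.acct_id, a.name
    FROM accounts a LEFT JOIN notes n ON n.acct_id = a.acct_id
""").fetchall()

print(len(dup), len(dedup))  # → 3 2
```

Note that DISTINCT only deduplicates the selected columns; if the query also selected a column from the right-hand table, the duplicates would remain.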
That's how I started out my career - now a software/solutions architect!
Have a workaround, but have also escalated a case to Oracle. Given that the threshold at which LIMIT misbehaves differs between clients, I would think it is something on the client side.