fuelsraka.blogg.se

Rick kubes

To date (MySQL 8.0.18) there is no dedicated function inside MySQL to re-create indexes. Since MySQL 8.0, MyISAM is slowly phasing into deprecated status; InnoDB is the current main storage engine. In most practical cases InnoDB is the best choice, it does a good job of keeping indexes working well, and you do not need to recreate them.

When it comes to large tables with hundreds of GB of data and rows and a lot of writing, the situation changes: indexes can degrade in performance. In my personal case I've seen a count(*) using a secondary index go from ~15 minutes to 4300 minutes after 2 months of writing to the table, with a linear increase over time. After recreating the index, performance went back to 15 minutes.

There are two ways to rebuild an index:

1) OPTIMIZE TABLE

This will compact the data and rebuild all indexes. InnoDB doesn't support in-place optimization, so the entire table will be read and re-created. This means you need the storage for the temporary file and, depending on the table, a lot of time (I've seen cases where an optimize takes a week to complete). Despite not being officially recommended, I highly recommend the OPTIMIZE process on write-heavy tables up to 100GB in size.

2) ALTER TABLE DROP KEY -> ALTER TABLE ADD KEY

You manually drop the key by name, then manually create it again. In a production environment you'll want to create the new key first, then drop the old version. The upside: this can be a lot faster than OPTIMIZE.
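
A minimal sketch of the OPTIMIZE route described above; the table name is a placeholder:

```sql
-- Rebuild the table and all of its indexes.
-- For InnoDB, OPTIMIZE TABLE is mapped to a full table rebuild,
-- equivalent to ALTER TABLE ... FORCE.
OPTIMIZE TABLE my_big_table;

-- The explicit equivalent form:
ALTER TABLE my_big_table FORCE;
```

Both statements copy the entire table into a temporary file, which is why you need free storage roughly the size of the table while the rebuild runs.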
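
The drop-key/add-key approach, in the production-safe order (create first, drop second), might look like this; table, index, and column names are placeholders:

```sql
-- Create the replacement index first, so queries on my_col
-- always have a usable index during the swap.
ALTER TABLE my_big_table ADD KEY idx_my_col_new (my_col);

-- Then drop the degraded original by name.
ALTER TABLE my_big_table DROP KEY idx_my_col;
```

MySQL permits two indexes on the same column during the overlap (it only emits a duplicate-index warning), so there is no window without index coverage.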








