logs archive / Chat DB / Freenode / #mysql / 2015 / November / 19
Narcarsiss
Hello, has anyone ever done MySQL reporting and can give me an example of what it looks like from a real-world scenario? I just have two tables (original) and (changes made). I'm stumped, as Google doesn't help me much. Cheers :)
DaddyEric
dbforge dude
devart.com
Narcarsiss
DaddyEric, Much Appreciated. I'll get into it.
DaddyEric
anyone in here use Devart DBForge Studio for writing reports?
i am trying to simplify a running tally, similar to reports having page numbers (1 of 2 etc.) but for every entry in the report
i have them grouped by several things
poncha
is it possible to find which thread in mysql is the one that is hogging cpu?
salle
poncha: Not really
poncha
i've never seen this kind of stuff before - according to ps, mysqld consumes 97736% cpu
salle
poncha: Not easily I mean. You can dig into performance schema
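On versions newer than the 5.1 discussed here (5.7+), performance_schema can map an OS thread to a connection; a sketch, assuming a TID taken from `top -H` (12345 is hypothetical):

```sql
-- MySQL 5.7+ only; performance_schema does not exist in 5.1.
-- Take the TID of the CPU-hogging thread from `top -H -p $(pidof mysqld)`,
-- then map it to a connection and its current statement:
SELECT thread_os_id, processlist_id, processlist_info
FROM performance_schema.threads
WHERE thread_os_id = 12345;
```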
poncha
not in 5.1 eh
back to square 1 ;)
hmm... how is it possible that Opened_tables=0 ?!
or is this only relevant to myisam?
tanj
or all tables went into table_cache
or you did show status instead of show global status
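A quick illustration of tanj's second guess; since 5.0, a plain SHOW STATUS defaults to session scope, whose counters start at zero for a fresh connection:

```sql
SHOW STATUS LIKE 'Opened_tables';         -- session scope (the default): can read 0
SHOW GLOBAL STATUS LIKE 'Opened_tables';  -- server-wide counter since startup
```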
salle
tanj: I'd bet a beer on the latter
poncha
right :0
not global
why would a select query be in "Sending data" after i kill the thread? (COMMAND=Killed STATE=Sending data)
it's not like there is stuff to roll back ..
jkavalik
and what is the query?
poncha
here's an example: SELECT `u`.* FROM `users` `u` WHERE `u`.`deleted`=0 AND `u`.`ignored`=0 AND `u`.`blacklisted`=0 ORDER BY `u`.`user_name`
jkavalik
does it create a temporary table?
poncha
right now it resorts to filesort (explain shows "Using where; Using filesort")
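One way that filesort could go away, sketched under the assumption that the three flag columns are only ever tested for equality: a composite index whose last column matches the ORDER BY lets InnoDB read rows pre-sorted (the index name is hypothetical):

```sql
ALTER TABLE users
  ADD INDEX idx_flags_name (deleted, ignored, blacklisted, user_name);
-- With equality predicates on the first three columns, the ORDER BY on
-- user_name can be satisfied by index order, so EXPLAIN should no longer
-- show "Using filesort".
```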
jkavalik
is it a MyISAM or InnoDB table? (in innodb the undo log can contain a lot of data if many writes happened during the select execution, but I have no idea if some of it is purged on query end - I would suppose it is async)
poncha
innodb
it is select though... why would it affect redo log?
jkavalik
and what version of mysql? 5.1? how old?
poncha
5.1.63
pretty old ;) even uptime since last boot is 430 days
jkavalik
selects run with repeatable-read by default, so the data for the snapshot has to be somewhere - I think it is stored in the undo log - "Records in a clustered index are updated in-place, and their hidden system columns point to undo log entries from which earlier versions of records can be reconstructed."
http://www.psce.com/blog/2012/05/15/why-do-threads-sometimes-stay-in-killed-state-in-mysql/#comment-236
but no idea if the undo log may be the reason, or slow temp table drop, or just the select code not checking often enough that it has been killed
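One thing that can be checked from the console while the query sits in the Killed state is how much undo history has piled up; a sketch:

```sql
-- Under the TRANSACTIONS section, "History list length" counts undo log
-- units not yet purged; a large, non-shrinking value hints at pending
-- MVCC cleanup work that could keep a killed SELECT around.
SHOW ENGINE INNODB STATUS\G
```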
poncha
"not fast enough" would be veery slow ;) -- 2507 Killed SELECT
tanj
salle: you win :)
poncha
the whole table is ~4k rows. and it took ages (and now ages to kill)
tanj
4k rows?
salle
poncha: "sending data" means it is actually executing the query and KILL is not forcing
poncha
yeah
tanj
that's nothing
poncha
exactly
tanj
it should return in a few seconds
it should return in a few milliseconds
poncha
this query usually is done in 0.0 seconds
tanj
:)
I guess something went wrong with the server
poncha
so this is a symptom of a larger disease
tanj
most definitely, yes
poncha
i just cant figure out what yet
tanj
any i/o or cpu issues?
"iostat -mx 1" is your friend
or any other concurrently running queries, global lock etc? shown in show processlist
poncha
there's high iowait... the disk writes ~28M/sec
jkavalik
swap?
poncha
no, the data disk - where mysql datadir is
it's a raid0 of 10 gen-purpose ebs ssd nodes (amazon aws)
aha. 28M but ~4k tps ...
so a lot of small disk writes
Diplomat
hey guys i have a little issue here.. for some reason my database is "read only"
in the file system mysql owns all the database files of course
tvenhola
what's your error?
Diplomat
fixed it.. I had innodb_force_recovery = 1
xdexter
can you generate the password hash of a mysql user through the bash command line? Or a web site?
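For the old-style (pre-5.7) `mysql_native_password` scheme, the hash that PASSWORD() produces is just '*' plus an uppercased double SHA-1, so it can be reproduced outside the server; a minimal sketch in Python:

```python
import hashlib

def mysql_native_password(plain: str) -> str:
    """Old-style MySQL PASSWORD() hash: '*' + uppercase SHA1(SHA1(password))."""
    inner = hashlib.sha1(plain.encode("utf-8")).digest()  # binary inner digest
    return "*" + hashlib.sha1(inner).hexdigest().upper()

print(mysql_native_password("password"))
# → *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19
```

This only covers the legacy scheme; newer auth plugins (caching_sha2_password) use a different format.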
mikkeljuhl
If I have 3 tables. One is for a truck. Another is for a driver. And the last is for a trip. That gives me three IDs which will be linked together, on occasion. What is the best way to structure that database?
Will it include having 4 tables - 3 for the ids and then a fourth linking them together? Or will it be 3 tables where I update each of them every time a new link is created?
jkavalik
mikkeljuhl, it depends on your use case - can a driver use multiple trucks on the same trip? on different trips? can the trip exist without a driver and truck?
mikkeljuhl
jkavalik: The trip will be created first, with no driver or truck assigned. The driver can only use one truck on the same trip, but different trucks for another trip.
Makes sense?
jkavalik
mikkeljuhl, so "trip" is more of a "route" actually ?
mikkeljuhl
jkavalik: Yes!
jkavalik
and can driver go the same route multiple times? with the same truck each time or may those differ?
mikkeljuhl
A new "route/trip" would be issued if it is the same distance (because a new shipment) Truck may be same or differ
jkavalik
so it really is a trip - route would contain only a "list" of possible routes (as drivers are a list of persons and trucks a list of vehicles)
poncha
dammit. how do i force it to free the mem? :( everything is locked out (max_connections is lower than the currently connected threads) and i've set innodb_max_dirty_pages_pct=0, but the dirty pages count does not decrease... and mem is not freed at all :(
mikkeljuhl
I suppose
jkavalik
mikkeljuhl, as you write it, the trip is an "instance" - so have tables of drivers and trucks, which do not change much, and then a table of trips with (driver_id, truck_id) columns, both nullable and NULL before anything is assigned to them - and updated with the proper IDs when decided
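jkavalik's layout, sketched as DDL (all table and column names are hypothetical):

```sql
CREATE TABLE drivers (
  id   INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE trucks (
  id    INT AUTO_INCREMENT PRIMARY KEY,
  plate VARCHAR(20) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE trips (
  id        INT AUTO_INCREMENT PRIMARY KEY,
  driver_id INT NULL,  -- NULL until a driver is assigned
  truck_id  INT NULL,  -- NULL until a truck is assigned
  FOREIGN KEY (driver_id) REFERENCES drivers (id),
  FOREIGN KEY (truck_id)  REFERENCES trucks (id)
) ENGINE=InnoDB;
```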
mikkeljuhl
Correct, that makes sense..! Thanks!
poncha
Innodb_buffer_pool_pages_data=380782, Innodb_buffer_pool_pages_dirty=20232, Innodb_buffer_pool_pages_misc=12434 ... pages_dirty "dances" - goes down but then goes up again. and pages_misc does not decrease ... and *all* the connected threads except for the one i'm using for console are killed
mikkeljuhl
jkavalik: Let's say a trip has a destStart, 2 via destinations, and a destEnd - would you then create another table called subTrip and put all that information therein?
jkavalik
mikkeljuhl, not in the case where EACH trip has exactly 2 via destinations, but otherwise probably yes - if you need to store the same data about each subtrip (length, time started, ...), or maybe a table for trips and a second one for "via" (with trip_id, order in the trip and other info for each)
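The second variant jkavalik mentions, a child table holding an ordered list of via points per trip, might look like this (names hypothetical):

```sql
CREATE TABLE trip_via (
  trip_id     INT NOT NULL,
  seq         INT NOT NULL,           -- position within the trip: 1, 2, ...
  destination VARCHAR(100) NOT NULL,
  PRIMARY KEY (trip_id, seq),
  FOREIGN KEY (trip_id) REFERENCES trips (id)
) ENGINE=InnoDB;
```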
mikkeljuhl
jkavalik: Okay, let's say I want to check if a driver is currently on a trip. Would it make more sense to check the trip table for all instances of the driver and see if all those trips' dates have passed - or have a boolean value on the driver table, called "on_trip"? I guess the problem arises with cronjobs and so on?
jkavalik
mikkeljuhl, do you always want to check only if the driver is on a trip NOW? or maybe if they were on a trip at any specific time? storing "on_trip" sounds like a bit of denormalization to me, you might do that for performance reasons, but that's only an optimization to be done later if needed
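Checking "on a trip NOW" straight from the trips table, without the denormalized flag; a sketch assuming hypothetical start_at/end_at columns on trips:

```sql
SELECT EXISTS (
  SELECT 1
  FROM trips
  WHERE driver_id = 42                      -- hypothetical driver id
    AND NOW() BETWEEN start_at AND end_at
) AS on_trip;
```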
Morg0th
Hello, I have this table: http://pastebin.ca/3261276 and I would like to speed up the performance when doing "select itemId, quantity from Collection where accountFk = '<accountId>'". Would it be better if I added an index on accountFk? Or is there already one because it's a part of the primary key?
poncha
is there a way to signal mysql to shut down (via console, by SQL) and keep the current connection alive till the end, to be able to monitor the progress? :(
Naktibalda
Morg0th: you don't need a separate key for accountFk, because it is the first field in PK
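That leftmost-prefix rule can be confirmed with EXPLAIN; a query like the following should report key: PRIMARY, meaning the PK already serves the lookup:

```sql
EXPLAIN SELECT itemId, quantity
FROM Collection
WHERE accountFk = '<accountId>';
-- expected: key = PRIMARY (no separate index on accountFk needed)
```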
Morg0th
Naktibalda: thanks, ok so it means that i can't really speed that up then?
Naktibalda
is it slow?
watmm
What's the difference between innodb_support_xa and the double-write buffer?
Morg0th
well it's not so fast when there are many lines, but i guess i will see if i can improve on other parts then, maybe it's not THE biggest bottleneck
snoyes
watmm: XA is a system for supporting transactions across systems. So if you have some rows to modify in MySQL, and also some rows to modify in MSSQL, and also some file to write to tape, and you want all of that done together or not at all as one big transaction, you use XA. The doublewrite buffer is a file flushing technique that makes it so that InnoDB is more crash-stable.
doublewrite is basically somebody gives you a phone number, you write it on your hand, and then you go and copy it into your phone's contact list. If you drop your phone and it breaks before you get a chance, you can still go look at what you wrote on your hand and enter it into your new phone.
phil22
hello
I have a table which should be filled with millions (or perhaps billions) of rows
there are a few fields, in particular a "code" field which is a random string of 20 chars
I need to generate a million rows at a time, so for each row I need to generate the random code (which is unique) and dump it into the db
I need to optimize this step
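A sketch of the generation step in Python, assuming uniqueness only has to hold within the generated batch; a set absorbs the (astronomically rare) collisions, and the rows can then be bulk-loaded with multi-row INSERTs or LOAD DATA INFILE rather than one INSERT per row:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def unique_codes(n, length=20):
    """Generate n distinct random codes of the given length."""
    seen = set()
    while len(seen) < n:
        seen.add("".join(secrets.choice(ALPHABET) for _ in range(length)))
    return list(seen)

# Demo batch; a real run would use n=1_000_000 per batch:
codes = unique_codes(10_000)
```

If codes must also be unique across batches, a UNIQUE index on the column plus INSERT IGNORE (and regenerating the few rejected rows) is one common approach.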