akuzminsky
lost_and_unfound: why don't you just use one of the existing backup tools? Approaches like this lead to almost certain data loss
mgriffin
akuzminsky: not if you don't notice
lost_and_unfound
akuzminsky: what would be your first suggestion for a backup tool?
akuzminsky
lost_and_unfound: mysqldump or xtrabackup, depending on database size. they both do the job well
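A minimal sketch of the mysqldump route, assuming InnoDB tables and a local server; the paths and file names are illustrative:

    # consistent logical dump; InnoDB tables are not locked for the duration
    mysqldump --single-transaction --routines --triggers --all-databases \
        > /backups/full-$(date +%F).sql

    # restore is just replaying the SQL, which is slow for large datasets
    mysql < /backups/full-2015-10-01.sql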
lost_and_unfound
akuzminsky: I am all for a proper (even commercial) option, I just can't afford to have a restore take 24 hours (like in this test case), as that amount of downtime is not acceptable in a disaster situation
akuzminsky
so i guess the database is rather large?
lost_and_unfound
akuzminsky: I did try mysqldump with various options, but the combinations I tried were not too successful.
akuzminsky: yes, 32GB
akuzminsky
lost_and_unfound: mysqldump will be slower than xtrabackup for obvious reasons. Take a full xtrabackup copy, apply the redo log as soon as you take the backup, and that will be the fastest.
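A rough sketch of that flow, assuming Percona XtraBackup 2.x with the innobackupex wrapper; the backup directory names are illustrative:

    # take a full physical copy of the data directory
    innobackupex /backups/

    # apply the redo log right away so the copy is consistent and ready to restore
    innobackupex --apply-log /backups/2015-10-01_02-00-00/

    # restore: stop mysqld, empty the datadir, then copy the prepared backup back
    innobackupex --copy-back /backups/2015-10-01_02-00-00/
    chown -R mysql:mysql /var/lib/mysql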
lost_and_unfound
i did look at xtrabackup from percona, but the full toolkit does not play well with FreeBSD, as some of the features/functions rely on /proc readings and that is not part of the FreeBSD platform
badcom
Hi guys. Which GUI tool do you guys use on Ubuntu? I installed MySQL Workbench, but it's a bit sluggish and it hangs sometimes =/
akuzminsky
well, xtrabackup and toolkit are two separate products. why do you put them together?
lost_and_unfound: i used to maintain xtrabackup on freebsd and it worked well.
lost_and_unfound
akuzminsky: xtrabackup is part of the percona toolkit in the freebsd ports/packages. will revisit the option; maybe it was the 2am-i-cannot-focus-anymore attitude and i misread something.
akuzminsky
lost_and_unfound: as far as I know xtrabackup and percona-toolkit are two separate ports. At least it was so when I last checked FreeBSD.
lost_and_unfound
badcom: I only use mysql-workbench (ubuntu, debian and freebsd). What I found slowed me down was having the management tab open on processes, where it monitors remote connections every 0.x seconds
akuzminsky
after all, I was the xtrabackup port maintainer, so it used to be a separate port.
lost_and_unfound
akuzminsky: ah... see my previous comment... "2am-i-cannot-focus-anymore attitude " =]
akuzminsky: I will revisit the option. And I agree that self-rolled scripts may potentially create more harm than good
akuzminsky
lost_and_unfound: haha, don't touch the database then! it's too dangerous at 2am! :)
jge
Hey guys. I'm seeing these mysql logs being constantly generated and have no idea what's causing it. Anyone can shed some light please: http://is.gd/uHvZU6
lost_and_unfound
akuzminsky: ... dba checklist.. don't touch the production db when: not having a solid backup [x], not paying attention [x], lack of sleep [x], after 2am [x], when drunk [x]
akuzminsky
lost_and_unfound: it's practically impossible to meet all the requirements at the same time
jge: there is a dangling mysqld. check `ps` and kill it
mgriffin
jge: might try: mysqladmin shutdown
jge: i would do stop with init, then mysqladmin shutdown, then kill anything left over, then start with init script
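A sketch of that sequence, assuming an Ubuntu box with the stock init script and upstart job; adjust paths and credentials as needed:

    sudo /etc/init.d/mysql stop     # stop via the init script first
    sudo mysqladmin shutdown        # ask any remaining mysqld to exit cleanly
    pgrep -l mysqld                 # check whether anything is still running
    sudo pkill mysqld               # only if a stray process remains
    sudo service mysql start        # bring it back up the supported way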
lost_and_unfound
jge: like they all suggested, a process may still be hanging around; on ubuntu you can also try: sudo service mysql stop
mgriffin
lost_and_unfound: also, figure out why apt is broken
jge: ^
jge: you have different client vs server versions, maybe apt-get install -f
(or apt-get update && apt-get upgrade)
badcom
lost_and_unfound, I'll have a look at that. Thanks. I've just installed one called Valentina Studio which looks good
cluelessperson
Question: is an enterprise using mysql in the field an issue license-wise?
darwin
not usually?
snoyes
you need to contact MySQL sales to be certain, but generally: if you distribute mysql or code that requires it, you need a license or have to make your code open source; you don't need one if you run mysql on your own servers and just serve web pages
lost_and_unfound
badcom: thanks, I have installed it and will give it a try
jge
crazy, I used init.d to stop mysql then I had to use service mysql stop to stop the other processes
mgriffin
!t jge 14.04
ubiquity_bot
jge: https://bugs.launchpad.net/ubuntu/+source/mysql-5.5/+bug/1273462 https://bugs.launchpad.net/ubuntu/+source/mysql-5.5/+bug/1308431 https://bugs.launchpad.net/ubuntu/+source/rsyslog/+bug/1311810
jge
does mysql start separate processes depending on whether you use the init script or service/upstart?
mgriffin
jge: tl;dr, use the OS per the docs, not how you assume it works
(this behavior is both a bug and user error)
jge
mgriffin: the os docs say to use service mysql start/stop.. looks like it's a bug (per the bugs linked), so should I just get rid of the init script?
mgriffin
jge: or ignore it and use "service"
jge
got it
mgriffin
jge: if you aren't the only admin, maybe strip the execute bit from the init.d
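A small sketch of that idea, assuming the stock Ubuntu path for the script:

    # stop other admins from starting MySQL through the old init script
    sudo chmod a-x /etc/init.d/mysql
    # from here on, manage it only with the service command
    sudo service mysql stop
    sudo service mysql start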
jge
thanks mgriffin
mgriffin
jge: also, bash completion can help with the "service" command, if your preference is due to /etc/init.d/<tab>
Vegitto
Hi. I want the id (key) to increase automatically every time I insert new entries in the middle of my table. How can I do that?
Can it be done with FOREIGN keys, and, if so, any ideas how?
mgriffin
!t Vegitto gaps
ubiquity_bot
Vegitto: efforts to manually manage auto_increment values almost always end in tragedy (CorticalStack)
kolbe
Vegitto: what. "in the middle"?
Vegitto
kolbe: well, for example, I have id = 1, 2, 3, 4, 5, and I want to insert a new 3 (by increasing all of the other values).
kolbe
Vegitto: WHAT?!
Vegitto: no.
Vegitto
kolbe: That's exactly what I'm asking: what is the most efficient way to do this?
kolbe
i think that's even crazier than any of the other possible interpretations i'd come up with
Vegitto: there is no efficient way to the PK of some arbitrarily large number of rows. this is a terrible idea.
to *modify* the PK
Vegitto
kolbe: How do you suggest I do it then? I need a PK for my entries. But I also need to insert new entries all the time.
kolbe
Vegitto: don't worry about the PK value
Vegitto: why are you trying to manually manage these IDs? why do you care what value each row has? why do you want to change the ID value each time you insert a new row?
Vegitto: does this have something to do with chronology? or sorting? or some other thing? what is this data you're storing?
Vegitto
kolbe: yes, sorting
kolbe: that's a fair point, though - it might not matter very much what the actual value is
kolbe
Vegitto: what is the criterion by which you really want to sort? what is the meaning of this value? is it arbitrary?
i.e. do you just say "i want to insert this arbitrary row and have it sort between these other 2 existing rows" without any other criterion that exists in the row data itself?
Vegitto
kolbe: pretty much, I'm sorting words in a book, with the structure (id, chapter, word)
kolbe: and I want them to come back in order once I SELECT a chapter
kolbe: I also need to modify the edit all the time, by inserting words inbetween in each chapter
kolbe
eek
Vegitto
ok, ignore that last line; basically, I want to insert new words in between old ones within existing chapters
kolbe
you should definitely not achieve this by modifying the PK
mgriffin
or use implicit sort
Vegitto
so I might have: (1, 1, 'Hello'); (1, 2, 'World') - and I want to insert (1, 2, 'there') in between
kolbe
implicit sort?
Vegitto
Or, I don't really care what id 'there' has, but I need to be able to sort it.
Afterwards
kolbe
it'd be easy to have a column that linked a word to the previous word in the chapter, but actually extracting the rows from the DB in sorted order is not so easy
mgriffin
sorry, don't use
Vegitto
So that I have 'Hello there World'.
And I will be manipulating words all the time, inserting and deleting them.
kolbe
what are you doing that causes you to frequently insert words between other words?
Vegitto
Editing the text.
kolbe
i don't know how folks usually implement this kind of thing, but you'll find that doing it efficiently in a DBMS is a big challenge
Vegitto
Another option would be to just get rid of an entire chapter and reinsert it anew at the end of the table (with fresh IDs). I could do that too, as the chapters are not too long.
kolbe
re-inserting the entire chapter every time you make an edit sounds bad to me
Vegitto
kolbe: yes, but so does manipulating the IDs
kolbe: actually, I would only reinsert it from time to time (once the user clicks 'save'), or maybe also autosave every so often, so i guess that's fine
kolbe: We're talking a couple thousand words a chapter, so it's not too bad, possibly. Although I would welcome having a more efficient way to do this.
kolbe and mgriffin : thanks
kolbe
Vegitto: i don't really have any great recommendations for you. storing graphs in a DBMS is easy, but querying them is nearly impossible.
Vegitto
kolbe: that's fair enough - thanks!
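Nobody in the channel endorses a specific scheme here, but one common alternative to renumbering the PK is a separate, gapped sort-key column: the AUTO_INCREMENT id is left alone, ordering happens on a position value that leaves room for insertions, and a chapter is renumbered only on save (roughly Vegitto's "rewrite on save" idea). A minimal sketch with illustrative table and column names:

    CREATE TABLE chapter_words (
        id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        chapter  INT UNSIGNED NOT NULL,
        position INT UNSIGNED NOT NULL,   -- gapped sort key, not the PK
        word     VARCHAR(64)  NOT NULL,
        KEY (chapter, position)
    );

    -- initial load with gaps of 1000 between consecutive words
    INSERT INTO chapter_words (chapter, position, word)
    VALUES (1, 1000, 'Hello'), (1, 2000, 'World');

    -- insert 'there' between them without touching any existing row
    INSERT INTO chapter_words (chapter, position, word)
    VALUES (1, 1500, 'there');

    -- read a chapter back in order
    SELECT word FROM chapter_words WHERE chapter = 1 ORDER BY position;

    -- on save (or when a gap is exhausted), renumber one chapter in place
    SET @p := 0;
    UPDATE chapter_words
       SET position = (@p := @p + 1000)
     WHERE chapter = 1
     ORDER BY position;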
badcom
lost_and_unfound, how did you disable the management?
danblack
badcom: a statement that has been made by employees under duress for many years. :-)
badcom
=P
adv_
why am i getting "Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 24 bytes)" on the line foreach($db->query($selectQuery) as $row) { ?
thumbs
adv_: because you need to ask ##php
adv_: or whatever channel pertains to your scripting language