  • 4 GB of RAM is a joke these days. So, no - sorry. I think you are done on that end. It can work, but that requires specific usage patterns. I would not run a db server on 4 GB of hardware out of principle - 16 GB of RAM costs pretty much nothing to start with.
  • SCSI RAID 5 is not optimal. Depending on usage patterns, you should have a minimum of TWO groups - one fast for writes (log), one fast for reads (data). I have had good success using a RAID 10 of 4+ discs for OS and LOG and another one for data. Mind you, though, that db was a lot larger. In your case, throwing out the RAID 5 and just putting in TWO mirrored SSDs makes sense, given that your data is only 53 GB. A mirror of two SSDs will probably boost your IO performance by a factor of 100. You are likely IO bound, not helped by your RAM being - by today's standards - pathetic; the sketch after this list shows one way to check that. Sorry if that sounds rude, but a db server should have MORE RAM than a developer workstation, and depending on what company you are in, you are on par or WAY below that.
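
To back the "you are likely IO bound" claim with numbers instead of gut feeling: most engines expose per-file IO latency counters. Here is a minimal sketch, assuming a SQL Server instance purely for illustration (the engine is not named); it shows the average read/write stall per data and log file. Sustained double-digit millisecond averages on a 53 GB database are a strong hint that the disks, not the queries, are the current bottleneck.

    -- Per-file IO latency since the last restart (SQL Server DMVs; assumption,
    -- adjust for your actual engine). High average stalls = storage is the limit.
    SELECT  DB_NAME(vfs.database_id)                              AS database_name,
            mf.physical_name,
            vfs.num_of_reads,
            vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_stall_ms,
            vfs.num_of_writes,
            vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_stall_ms
    FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN    sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id     = vfs.file_id
    ORDER BY avg_read_stall_ms DESC;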

Is there some standard approach, or is it just experience and sense?

Experience and sense. You also think forward and check what makes sense over a couple of years. For example, SuperMicro has NICE servers with room for 24-72 discs in a SAS configuration. So, you can possibly get one of those to avoid using a SAN (which is more expensive) and fill in discs as you need. Others get a small server and then run out of options. You can also get some ideas from testing on a normal workstation.

It is the standard story; there is a fight between developers and administrators.

No. It is not.

One side denounces the database design and queries as bad, while the other says it is a lack of hardware and the amount of data.

No again. Db design can be measured pretty objectively. As in: there are certain documented and known approaches (that a LOT of developers are basically totally ignorant of). Ever heard of 5th normal form?
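
Just to make "documented and known approaches" concrete: normalization violations are things you can point at in the schema, not matters of taste. A proper 5NF example needs more room, so here is a deliberately small, hypothetical 3NF violation and its fix (table and column names are made up for illustration):

    -- Not 3NF: customer_city depends on customer_id, not on the key order_id
    -- (a transitive dependency), so it gets duplicated and can drift.
    CREATE TABLE orders_denormalized (
        order_id      INT PRIMARY KEY,
        customer_id   INT NOT NULL,
        customer_city VARCHAR(100) NOT NULL
    );

    -- Normalized: the city lives with the customer, orders just reference it.
    CREATE TABLE customers (
        customer_id   INT PRIMARY KEY,
        customer_city VARCHAR(100) NOT NULL
    );

    CREATE TABLE orders (
        order_id      INT PRIMARY KEY,
        customer_id   INT NOT NULL REFERENCES customers (customer_id)
    );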

Same with queries. I can actually SEE whether a query is efficiently executed. There is no real grey area here. That said, there may be tradeoffs, but if that goes into a blaming game, then I can be pretty sure there IS something wrong.
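
How do you SEE it? Every serious engine will show you the execution plan and the IO a statement caused. Sticking with the SQL Server assumption from the sketch above, and reusing the hypothetical orders table:

    -- Report logical/physical reads and CPU/elapsed time for the query below.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT order_id, customer_id
    FROM   orders               -- hypothetical table from the earlier sketch
    WHERE  customer_id = 42;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;

A query that burns tens of thousands of logical reads to return a handful of rows is being executed as a scan; that shows up in these numbers and in the graphical plan, which is why there is no real grey area.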

Pretty often developers don't know anything beyond "this is a simple select", have no clue how to deal with a database, and then try to throw hardware at the problem. Been there, seen that. Not always, but it is a likely guess.
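
The classic shape of that problem: a "simple select" filtering on an unindexed column scans the whole table, and no amount of RAM or SSD changes the fact that it touches every row. Continuing the hypothetical tables above, the fix is often one line, not new hardware:

    -- Without this, WHERE customer_id = ... on a large orders table is a full scan.
    -- With it, the same "simple select" becomes a cheap index seek.
    CREATE INDEX ix_orders_customer_id ON orders (customer_id);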
