
Something is very wrong with this site - somebody help?!?

Did you just lose updates to the sales numbers, or was the Wii cutback intentional?




So that's why there were errors all over the place.




Nintendo still doomed?
Feel free to add me on 3DS or Switch! (PM me if you do ^-^)
Nintendo ID: Mako91                  3DS code: 4167-4543-6089

Wasn't expecting those high PS3 sales, Wii sales exploding, and 360 sales starting to slow...



 

2008 end-of-year predictions:

PS3: 22M

360: 25M

Wii: 40M

Sounds like a server/hard disk issue - rather than a database issue.

Databases are designed to be resilient; a reboot should only affect data being written at that very moment.

Even a reboot (done remotely, I presume) should commit all data and close everything cleanly before restarting.

...

I presume it's running on a Linux server with XFS or something like that? God, I hope the site is not running on a Windows server ;)
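
For what it's worth, if it does turn out to be MySQL on InnoDB, the flush-on-commit behaviour is controlled by a standard server variable - everything else here is just my guess about the setup:

SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
-- 1 = write and sync the log to disk at every commit (safest, the default)
-- 0 or 2 = can lose up to about a second of commits in a crash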

 



Gesta Non Verba

Nocturnal is helping companies get cheaper game ratings in Australia:

Game Assessment website

Wii code: 2263 4706 2910 1099

shams said:

Sounds like a server/hard disk issue - rather than a database issue.

Databases are designed to be resilient; a reboot should only affect data being written at that very moment.

Even a reboot (done remotely, I presume) should commit all data and close everything cleanly before restarting.

...

I presume it's running on a Linux server with XFS or something like that? God, I hope the site is not running on a Windows server ;)

 


Even though I agree with your general points, I'd find it very surprising if several days' worth of data were lost due to disk issues.

 



My Mario Kart Wii friend code: 2707-1866-0957


There's nowt wrong with a Windows server as long as you're using at least SQL Server 2005. At work we run a 50GB database with 15-minute incremental backups for Reporting Services, supporting a website that takes around 40 grand a day plus 250 in-house connections, and we've never had a problem (well, that's not strictly true - we've had a couple, but nothing major).
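
To give an idea, the incremental side is basically just log backups on top of a full one. Rough T-SQL sketch - the database name and paths are made up, and log backups assume the full recovery model:

BACKUP DATABASE SiteDB TO DISK = 'D:\Backups\SiteDB_full.bak';
-- then, every 15 minutes or so:
BACKUP LOG SiteDB TO DISK = 'D:\Backups\SiteDB_log.trn';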

Anyway - can't see it being the database, to be honest. Is it only one table that's getting corrupted, or several? And are the lost records leaving any orphan records in the other tables?
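
Orphans are easy enough to spot with a LEFT JOIN - table and column names here are just placeholders for whatever the real schema is:

SELECT p.id
FROM posts p
LEFT JOIN users u ON u.id = p.user_id
WHERE u.id IS NULL;
-- any rows returned reference a user that no longer exists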



Those people that think they're perfect give a bad reputation to us who are... 

"With the DS, it's fair to say that Nintendo stepped out of the technical race and went for a feature differentiation with the touch screen, but I fear that it won't have a lasting impact beyond that of a gimmick - so the long-lasting appeal of the platform is at peril as a direct result of that." - Phil Harrison, Sony

Other than that, all I can think of that might corrupt a table is duplicate primary keys or NULL values in the pkey field.
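
Both are easy to test for - again with placeholder names:

-- duplicates in what should be a unique key
SELECT user_id, COUNT(*) FROM users GROUP BY user_id HAVING COUNT(*) > 1;

-- NULLs in the key column
SELECT COUNT(*) FROM users WHERE user_id IS NULL;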



Those people that think they're perfect give a bad reputation to us who are... 

"With the DS, it's fair to say that Nintendo stepped out of the technical race and went for a feature differentiation with the touch screen, but I fear that it won't have a lasting impact beyond that of a gimmick - so the long-lasting appeal of the platform is at peril as a direct result of that." - Phil Harrison, Sony

NJ5 said:
shams said:

Sounds like a server/hard disk issue - rather than a database issue.

Databases are designed to be resilient; a reboot should only affect data being written at that very moment.

Even a reboot (done remotely, I presume) should commit all data and close everything cleanly before restarting.

...

I presume it's running on a Linux server with XFS or something like that? God, I hope the site is not running on a Windows server ;)

 


Even though I agree with your general points, I'd find it very surprising if several days' worth of data were lost due to disk issues.

 

Even if power was cut at the exact same time a commit is being made, the most that should happen is losing that commit. Basically, the real question is whether it's the actual database that is corrupted (in other words, MySQL simply can't read it, or reads it wrongly) or the data that is corrupted (in other words, some of the data got changed, breaks integrity constraints, and so on).

I don't see how you could LOSE data, since a commit should save the data to disk immediately; there's no separate "save" step with MySQL. Unless the partition is mounted with unionfs or async options, which would be very unlikely. So the most likely scenario is that some data was deleted somehow (either a mistake in your code or an SQL injection attack), or some data was held in memory but never actually committed to the database.
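
Just to illustrate the injection angle: if user input gets pasted straight into the query string, a crafted value can turn a harmless statement destructive. Completely made-up example:

-- intended query, with $token coming straight from user input:
DELETE FROM sessions WHERE token = '$token';
-- if someone submits the value:  x' OR '1'='1
-- the server actually sees:
DELETE FROM sessions WHERE token = 'x' OR '1'='1';
-- ...which deletes every row in the table

Parameterised queries or proper escaping prevent exactly that.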

I agree with NJ5 that this is not likely to be a hardware issue, but not knowing much about the problem makes it difficult to determine the cause.



Help! I'm stuck in a forum signature!

You could have a broken drive that lies about data having been written to the disk, and that can cause database corruption. This is most often the case with cheap disks (i.e. some SATA disks), but buggy SCSI and SAS drives have been known to mess up too.

With MySQL, you get better crash recovery with InnoDB tables than with MyISAM.
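
Checking which engine each table uses, and converting, is straightforward - standard MySQL, with a placeholder table name:

SHOW TABLE STATUS;
-- the Engine column shows MyISAM vs InnoDB for each table
ALTER TABLE posts ENGINE = InnoDB;
-- rewrites one table under the new engine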



omgwtfbbq said:
NJ5 said:
shams said:

Sounds like a server/hard disk issue - rather than a database issue.

Databases are designed to be resilient; a reboot should only affect data being written at that very moment.

Even a reboot (done remotely, I presume) should commit all data and close everything cleanly before restarting.

...

I presume it's running on a Linux server with XFS or something like that? God, I hope the site is not running on a Windows server ;)

 


Even though I agree with your general points, I'd find it very surprising if several days' worth of data were lost due to disk issues.

 

Even if power was cut at the exact same time a commit is being made, the most that should happen is losing that commit. Basically, the real question is whether it's the actual database that is corrupted (in other words, MySQL simply can't read it, or reads it wrongly) or the data that is corrupted (in other words, some of the data got changed, breaks integrity constraints, and so on).

I don't see how you could LOSE data, since a commit should save the data to disk immediately; there's no separate "save" step with MySQL. Unless the partition is mounted with unionfs or async options, which would be very unlikely. So the most likely scenario is that some data was deleted somehow (either a mistake in your code or an SQL injection attack), or some data was held in memory but never actually committed to the database.

I agree with NJ5 about this not likely to be a hardware issue, but not knowing much about the problem makes it difficult to determine the cause.

Yeah, I'd agree with all of that. Servers are pretty much designed either not to lose data at all or, if data is lost, to lose only a minimal amount.

The only other possibility I can think of is physical disk (sector) corruption, which wouldn't be likely - and hopefully any server would be configured in some RAID layout anyway.

I'm definitely no expert - but I have never lost any data on a MySQL database (driven through PHP 4, I think) that wasn't my fault.

...

If the data loss is restricted to users only, I would suspect some dodgy data made its way into the database, caused some form of error in an iterative process, and triggered a truncate (or something similar).

I also wouldn't be surprised if the site were under some form of DoS attack.
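
And if binary logging happened to be enabled, whatever statement wiped the data should still be sitting in the log with a timestamp. Standard MySQL commands - the log file name will differ:

SHOW BINARY LOGS;
SHOW BINLOG EVENTS IN 'mysql-bin.000012' LIMIT 50;
-- a stray DELETE/TRUNCATE (or injected SQL) would show up here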

 



Gesta Non Verba

Nocturnal is helping companies get cheaper game ratings in Australia:

Game Assessment website

Wii code: 2263 4706 2910 1099