If you run the following code, the result is a bit bizarre:
decimal x = (276m / 304m) * 304m;
double y = (276d / 304d) * 304d;
Console.WriteLine("decimal x = " + x);
Console.WriteLine("double y = " + y);
Result:
decimal x = 275.99999999999999999999999
double y = 276.0
Can someone explain this to me? I don't understand how this can be correct.
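For reference, printing the intermediate quotient makes it easier to see where the two types diverge (a minimal sketch of the same computation; the variable names are my own):
decimal dq = 276m / 304m;     // the repeating quotient 0.90789473684210526315789473684...
                              // rounded to decimal's 28-29 significant digits
double fq = 276d / 304d;      // the same quotient rounded to the nearest binary double
Console.WriteLine(dq * 304m); // the rounded decimal quotient times 304 falls just short of 276
Console.WriteLine(fq * 304d); // the double product rounds back to exactly 276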