There are many variables at play: the dynamometers, operators, engines, transmissions, gear ratios, correction factors, tire pressures, fluid temperatures, etc. For instance, when was the dyno's load cell last calibrated? All it actually measures is force (i.e., torque); hp is calculated from that. Which correction factor was used? The aftermarket usually corrects to STP, standard temperature and pressure (60 deg F and 29.92 inHg dry air, if memory serves), which is a much higher relative air density than the reference conditions the auto manufacturers correct to. Correcting to the higher relative air density of STP shows several percent more power than the manufacturers' factors do. The more sophisticated formulas the manufacturers use also account for engine friction; if friction isn't measured, a standard default value is plugged into the formula. To be accurate, the correction factor should be applied to indicated hp, not brake hp: subtracting engine friction from ihp yields bhp.
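To make the difference concrete, here's a minimal Python sketch comparing the two styles of correction. The reference conditions (60 deg F / 29.92 inHg dry air for the aftermarket "STD" factor, 99 kPa / 25 deg C for SAE J1349) and J1349's built-in ~85% mechanical-efficiency default are standard published values as best I recall them; treat the constants and the example weather numbers as assumptions, not gospel:

```python
import math

def cf_std(p_dry_inhg: float, temp_f: float) -> float:
    """Aftermarket 'STD'-style correction (SAE J607 flavor):
    reference 60 deg F and 29.92 inHg dry air."""
    return (29.92 / p_dry_inhg) * math.sqrt((temp_f + 460.0) / 520.0)

def cf_sae_j1349(p_dry_kpa: float, temp_c: float) -> float:
    """SAE J1349-style brake-power correction: reference 99 kPa dry air
    and 25 deg C. The 1.18/-0.18 terms embed the standard's default
    assumption of ~85% mechanical efficiency when friction isn't
    measured -- i.e., the correction is really meant for indicated
    power (bhp = ihp - friction hp)."""
    atm = (99.0 / p_dry_kpa) * math.sqrt((temp_c + 273.0) / 298.0)
    return 1.18 * atm - 0.18

# Same day, same weather: 85 deg F (29.4 C), 29.10 inHg (~98.5 kPa) dry air.
observed_hp = 300.0  # made-up observed brake hp, for illustration only
print(f"STD-corrected:   {observed_hp * cf_std(29.10, 85.0):.1f} hp")
print(f"J1349-corrected: {observed_hp * cf_sae_j1349(98.5, 29.4):.1f} hp")
```

On a warm day like the one sketched here, the STD factor reports roughly 3-4% more power than the J1349 factor from the exact same pull, which is the inflation described above.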
How accurate were the instruments, and where were they located relative to the air the engine was actually ingesting when pressure and temperature were measured? Humidity is another variable.
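Humidity matters because water vapor displaces oxygen, so only the dry-air portion of the barometric pressure belongs in a correction factor. A small sketch, using the Magnus approximation for saturation vapor pressure (one of several psychrometric formulas; the coefficients here are my assumption):

```python
import math

def vapor_pressure_kpa(temp_c: float, rel_humidity: float) -> float:
    """Saturation vapor pressure via the Magnus approximation,
    scaled by relative humidity (0..1)."""
    p_sat = 0.61094 * math.exp(17.625 * temp_c / (temp_c + 243.04))
    return p_sat * rel_humidity

# Subtract the water-vapor partial pressure to get the dry-air
# pressure the engine's oxygen supply actually depends on.
p_baro_kpa = 99.5   # example barometer reading
temp_c = 29.4       # example intake-air temperature
p_dry_kpa = p_baro_kpa - vapor_pressure_kpa(temp_c, 0.60)
print(f"dry air pressure: {p_dry_kpa:.2f} kPa")
```

At 60% relative humidity on a warm day, the vapor pressure knocks a couple of kPa off the barometric reading, which is enough to shift a corrected number by a percent or two.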
When I worked at Ford's certification test lab, they spent HUGE money trying to make the chassis-dyno emissions cells correlate with each other. They would take a correlation vehicle and drive it on different dynos, but even the correlation vehicle was a variable, usually getting more efficient as the miles accumulated. Also, the driver could give you almost any emissions numbers the engineer wanted without violating the drive trace.
It's quite possible the 90 hp at the wheels is correct, but it could be more or less. I've not heard of manufacturers rating power at the wheels. Wheel hp changes with the transmission and which gear is selected; transmissions are typically most efficient near the 1:1 ratio, as the sketch below illustrates.
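As an illustration of how much the selected gear alone can move the number, here's a sketch with made-up driveline efficiencies; both the crank figure and the percentages are hypothetical, chosen only to show the shape of the effect:

```python
# Hypothetical driveline efficiencies by gear -- illustrative numbers,
# not measurements. The 1:1 gear (4th here) passes torque straight
# through the mainshaft, so losses are typically lowest there.
crank_hp = 110.0  # assumed crank rating, for illustration
efficiency_by_gear = {3: 0.82, 4: 0.88, 5: 0.84}

for gear, eff in efficiency_by_gear.items():
    print(f"gear {gear}: ~{crank_hp * eff:.0f} hp at the wheels")
```

With numbers like these, the same assumed 110 hp engine reads anywhere from about 90 to 97 hp at the wheels depending only on which gear the operator ran the pull in.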
In summary, the same vehicle tested in many different facilities will yield many different results.