Bug 15105
Summary: | i2c-tiny-usb fails on big endian machines | |
---|---|---|---
Product: | Drivers | Reporter: | Jens Richter (jens)
Component: | I2C | Assignee: | Jean Delvare (jdelvare)
Status: | RESOLVED CODE_FIX | |
Severity: | normal | |
Priority: | P1 | |
Hardware: | All | |
OS: | Linux | |
Kernel Version: | 2.6.22 and later | Subsystem: |
Regression: | No | Bisected commit-id: |
Attachments: | Proposed fix | |
Description
Jens Richter
2010-01-22 18:41:16 UTC
Jean Delvare:

Jens, thanks for reporting. I agree with you on the first issue (func), although I think the type of the "func" variable could additionally be changed to __le32 for clarity.

For the second issue, I'm not so sure. While "delay" is a 32-bit variable, it is ultimately passed to usb_control_msg() as a __u16, so cpu_to_le16(delay) as we have today makes more sense to me than cpu_to_le32(delay) as you propose. Maybe it would all be clearer if delay itself were changed to an unsigned short.

Also, if the "value" parameter of usb_write() must be forced to little-endian for command CMD_SET_DELAY, isn't it strange that no byte-swapping is needed for the other uses of usb_write(), or even usb_read()? This doesn't seem consistent. The only other command passing a non-zero value is CMD_I2C_IO; it passes the message flags, but I am not quite sure it really needs to do anything with them, so it's hard to tell whether that part of the code is correct or not. I am wondering if the byte-swapping should happen in usb_write() and usb_read() instead. One would need to take a look at the device's firmware to make sure. The author of the driver, Till Harbaum, would probably know... I have asked for his help.

I don't have any big-endian machine here, so I can't test.

Jens Richter:

Hello Jean, I've tested it on a platform with an IXP425 set up with CONFIG_BIG_ENDIAN. I also tried defining 'delay' as a short first; this wasn't successful. I look forward to Till's recommendations.

Jean Delvare:

I am starting to wonder whether byte-swapping for delay is needed at all. After all, it is passed as a USB command parameter, not as a data field. If any byte reordering were needed, I'd expect the USB stack to take care of it. So maybe the right fix is to drop the byte-swapping altogether. I'll attach a patch, please give it a try.

Created attachment 24717 [details]
Proposed fix
```diff
 /* This is the actual algorithm we define */
@@ -216,8 +217,7 @@ static int i2c_tiny_usb_probe(struct usb
 		 "i2c-tiny-usb at bus %03d device %03d",
 		 dev->usb_dev->bus->busnum, dev->usb_dev->devnum);
 
-	if (usb_write(&dev->adapter, CMD_SET_DELAY,
-		      cpu_to_le16(delay), 0, NULL, 0) != 0) {
+	if (usb_write(&dev->adapter, CMD_SET_DELAY, delay, 0, NULL, 0) != 0) {
 		dev_err(&dev->adapter.dev,
 			"failure setting delay to %dus\n", delay);
 		retval = -EIO;
```

Jens Richter:

The proposal to remove cpu_to_le16() is the key. Whether 'delay' is defined as an integer or a short doesn't matter; it works on both big and little endian machines. I tested it on an x86 and an ARM platform. Default clock rate measured: 50kHz.

Jean Delvare:

Did you say 50kHz? The source code says:

`/* i2c bit delay, default is 10us -> 100kHz */`

Is this wrong? Anyway, thanks for reporting and testing. The fix is on its way to Linus and stable.

Jens Richter:

The measured 50kHz matches the description from Till Harbaum on his homepage:

... The i2c-tiny-usb provides a software adjustable i2c clock delay allowing to configure the i2c clock. The default delay is 10us. Due to additional delays in the i2c bitbanging code this results in a i2c clock of about 50kHz. ...

Jean Delvare:

OK, I'll update the comment in the source code then, thanks. I'll send the fix for the endianness bugs upstream today.

Fixed in 2.6.33 and 2.6.32.9.
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=1c010ff8912cbc08d80e865aab9c32b6b00c527d

Fix pending for the 2.6.27 stable series.
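For readers wondering why dropping the conversion works, a minimal sketch follows. It uses a hypothetical helper set_delay() rather than the driver's usb_write(); the request-type flags, the timeout and the CMD_SET_DELAY value shown are illustrative assumptions, not taken from the driver. The point it demonstrates is the one discussed above: usb_control_msg() takes its value/index arguments in CPU byte order and the USB core converts them to the little-endian SETUP packet format internally, so a caller that applies cpu_to_le16() first ends up swapping the bytes twice on big-endian machines.

```c
#include <linux/usb.h>

#define CMD_SET_DELAY	2	/* illustrative value; use the driver's own definition */

/*
 * Hypothetical helper, NOT the driver's actual code.  The delay is handed
 * to usb_control_msg() in CPU byte order; the USB core performs the
 * cpu_to_le16() conversion itself when it fills in the wValue field of the
 * SETUP packet, so no byte-swapping is needed (or wanted) in the caller.
 */
static int set_delay(struct usb_device *usb_dev, u16 delay_us)
{
	return usb_control_msg(usb_dev, usb_sndctrlpipe(usb_dev, 0),
			       CMD_SET_DELAY,			/* bRequest */
			       USB_TYPE_VENDOR | USB_RECIP_INTERFACE,
			       delay_us,			/* wValue, CPU order */
			       0,				/* wIndex */
			       NULL, 0,				/* no data stage */
			       2000);				/* timeout, ms */
}
```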
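Similarly, for the first issue (func) raised at the top of the thread, here is a minimal sketch of how the functionality query could look once the variable is declared __le32 and converted with le32_to_cpu(), as Jean suggests. The usb_read() signature and the CMD_GET_FUNC request name are assumed to follow the driver's conventions; this is a sketch of the idea, not a verbatim copy of the committed fix.

```c
/*
 * Sketch of the functionality query.  The buffer the device fills is
 * little-endian on the wire, so it is declared __le32 and converted
 * exactly once with le32_to_cpu() before being returned to the I2C core.
 * usb_read() and CMD_GET_FUNC are assumed to be the driver's own helper
 * and request constant; this snippet would live inside the driver file.
 */
static u32 usb_func(struct i2c_adapter *adapter)
{
	__le32 func;

	/* ask the device for its functionality bitmask */
	if (usb_read(adapter, CMD_GET_FUNC, 0, 0, &func,
		     sizeof(func)) != sizeof(func)) {
		dev_err(&adapter->dev, "failure reading functionality\n");
		return 0;
	}

	return le32_to_cpu(func);
}
```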