There are no changes in the Mac OS X specific components for this release. See earlier blog posts for previous changes.
Download NTFS-3G 1.2712 [stable]
Download NTFS-3G 1.2712 [ublio] (patched for improved performance)
Packaging, patching, some OS X-related development and testing have been done by Catacombae Software (i.e. myself).
Requirements: Mac OS X 10.4/10.5, a PowerPC or Intel computer, MacFUSE 1.3 or later installed (1.7 recommended).
This package has been tested with OS X 10.4.11/Intel and OS X 10.5.4/Intel.
Known issues:
- Filenames created in Windows that contain international characters (accents, umlauts and similar marks) or Korean characters may appear unreadable in the Finder. This is because the Finder apparently expects all filenames in Unicode decomposed form, while NTFS allows filenames in both composed and decomposed form. This issue is hard to solve in a pretty way, but you should still be able to access these files from the Terminal; for me, copying the affected files to an HFS+ drive with the command "cp" worked fine (see the example after this list).
- After installing ntfs-3g, all NTFS drives will disappear from the "Startup Disk" preference pane. Disabling or uninstalling ntfs-3g brings them back. I don't have a solution for this, but you can still choose your startup drive by:
- Holding down the Option key during boot (or Alt for non-Apple keyboards).
- Intel users only: Install the rEFIt boot manager for better control of the boot process.
- Using the command line utility bless (see "man bless" for more information; an example follows below).
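Two illustrative Terminal examples for the issues above (all volume, file and device names are made up, so substitute your own). To copy a file whose decomposed-form name is hard to type, a glob avoids typing the special characters at all:
> cd /Volumes/NTFS
> cp M*nchen.txt /Volumes/MacHD/
And a typical bless invocation for selecting a BIOS-booting Windows volume on an Intel Mac might look like this (double-check the device node with "diskutil list" before running it):
> sudo bless --device /dev/disk0s1 --setBoot --legacy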
Sources:
ntfs-3g 1.2712 (patched)
ntfsprogs 1.13.1
fuse_wait.c
ntfs-3g_daemon.c
Hi,
I'm using OS X 10.5.4 PPC, MacFUSE Core 1.7.1 and this NTFS-3G build (with ublio). During copying to an external USB drive I get -36 and -43 errors at random (same with the previous NTFS-3G build). Most times a second try works, but it's nevertheless quite annoying. Any ideas on that behaviour? The NTFS file system is fine (checked several times on XP) and the errors occur even on a freshly formatted file system.
This problem has occurred since I upgraded from MacFUSE 1.3.1 and an older NTFS-3G version (if my memory doesn't deceive me), but that combination caused some kernel panics, so the present situation is an improvement nevertheless.
Anonymous, July 18, 2008 11:09 AM:
Yes, unfortunately I know the issue. I have experienced it myself from time to time.
It seems to be caused by a buggy USB driver from Apple. I have been talking to Szaka (main ntfs-3g developer) about it, and he had received these reports earlier. In my experience this only happens with OS X 10.4, and not 10.5, but I could be mistaken.
If you're saying that these errors are dependent on the MacFUSE version and/or the ntfs-3g version, I'd like you to check whether the problem occurs with MacFUSE 1.3 and the latest version of the driver.
Also, please put some sort of signature on your posts so I can address you in a better way than "Anonymous, July 18, 2008 11:09 AM". :)
Hi Erik,
That was fast!
First I have to correct one thing (sorry!): I use FireWire to connect the disk (WD MyBook Edition, 500 GB, 2 partitions, 15 GB NTFS, the rest ext2) to my Mac (Mac mini G4 PPC, 1.2 GHz, OS X 10.5.4, fully patched). At the moment I use MacFUSE Core 1.7.1 and NTFS-3G 1.2712 ublio (errors -36 and -43).
Before the upgrade I used MacFUSE Core 1.3.1 and NTFS-3G 1.2216 stable (maybe that's relevant). I didn't get said errors (at least not so often that I could recall them), but sometimes a kernel panic.
I upgraded to the newer MacFUSE/NTFS-3G after the update to 10.5.4, so the Apple update might be related to the problems too.
I can't recall if there were many errors when I used 10.4, sorry 'bout that.
I think before going back to 1.3.1 (I'm not eager for kernel panics ...) I will switch off the ublio layer with the script provided with the dmg file.
Tom:
I get comments right into my mailbox, so I can reply (and delete spam) pretty quickly. :)
Let me know how the behavior is with ublio turned off. I think ntfs-3g works pretty fast in OS X 10.5 even without ublio...
Also note the transfer speed when copying a file (that is, whether it's noticeably faster/slower or roughly the same... you don't have to give me exact figures).
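If you want a rough figure, the "time" utility gives one without a stopwatch (the file name here is just an example):
> time cp somebigfile.iso /Volumes/NTFS/
Dividing the file size by the reported "real" time gives an approximate MB/s rate.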
The thing is that while this problem has been seen with USB drives, and now FireWire drives, I have never heard any report of problems with internal (IDE/SATA) drives. This had led me to believe that there is some bug related to accessing raw disk devices through the USB driver, but that assumption does not seem to hold now that it occurs with FireWire too.
I don't think this issue has anything to do with recent OS X updates.
I will test this over the next few days and report back.
Thanks for your help!
Tom:
Just to ensure there are no misunderstandings:
Make sure that you still use the same build, the ublio build, but with ublio disabled. The stable build has always worked fine in this regard.
If you're unsure, execute the command "/usr/bin/ntfs-3g --help" in the Terminal and check the first lines for the messages about USE_UBLIO and USE_ALIGNED_IO, which indicate that you're using the ublio build.
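For example, the following should print those lines if they are present (assuming the help text goes to standard error, hence the redirect):
> /usr/bin/ntfs-3g --help 2>&1 | grep -E 'USE_UBLIO|USE_ALIGNED_IO'
If it prints nothing, you are most likely running the stable build.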
Thanks for helping out with testing. :)
Error -36 is 'I/O error' and -43 is typically a consequence of it. They are documented on http://developer.apple.com/documentation/Carbon/Reference/File_Manager/Reference/reference.html#//apple_ref/doc/uid/TP30000107-CH5g-RCM0037
The most common reasons for the 'I/O' errors are documented on http://ntfs-3g.org/support.html#ioerror
Especially check out the WD MyBook part and the links referred to there. Many people are complaining about dying WD MyBook drives, not only on OS X but on Linux and Windows too. It seems to be a serious disk manufacturing error.
Hi,
I just did some rudimentary testing. I copied a folder with 10 GB of content to the FireWire disk (ublio enabled), and everything went fine. I copied another folder (650 MB), no error again. The third folder however (670 MB) produced a -43 error halfway through the process. I deleted the folder, tried again and got a -36 error (deleted the folder again). Then I unmounted the device, ran the "disable caching" command and remounted the disk. Then I copied the third folder again, with no errors this time. The data transfer with ublio enabled is way faster, something around 70% (depending). CPU usage for ntfs-3g with ublio enabled is in the range of 16% to 20%; when ublio is disabled it sometimes drops to 10%, but mainly stays in the 16% to 20% range too. I hope that helps a little bit.
I don't think it's a hardware error related to the WD MyBook drive, as these errors happen only when accessing the NTFS file system with NTFS-3G, not on XP with native support, nor on Linux when accessing the ext3 partition. The internal hdd in my Mac doesn't show any errors either, and I run S.M.A.R.T. tests on a regular basis.
Tom: Check your logs for the reason for the I/O error. They will tell you whether it's hardware related or not.
Also, if the I/O errors happen at random times then it's quite probably a hardware error. If you can reliably reproduce the same error (e.g. when copying the same directory, the error always happens at exactly the same place) then it's probably a software error, unless you have some media/hardware fault at a certain place on the disk.
Errors during higher CPU usage are also a sign of a hardware error, because the higher load stresses the hardware more.
An ublio-related, big-endian (PPC) bug is also possible, since I think not many people are having it; otherwise there would be many more such bug reports.
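On OS X, a quick way to scan the system log right after a failed copy is something like this (a rough starting point; the exact driver messages vary):
> grep -i error /var/log/system.log | tail -20
And for the reproducibility check, repeating the same copy makes it easy to see whether the failure point moves (the paths are examples):
> for i in 1 2 3; do rm -rf /Volumes/NTFS/test; cp -R ~/testdata /Volumes/NTFS/test; done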
Hi,
I did some more testing this morning; with ublio disabled I copied approx. 13 GB of data to the disk without any error (albeit with a lower transfer rate).
Is it possible that the ublio-enabled NTFS-3G is prone to timeouts, i.e. if the source hdd has a high load (many files open for reading/writing) and delivers data too slowly, the ublio-enabled NTFS-3G reckons that as an error?
In my opinion it's either related to that or to a freak PPC/ublio bug not reported yet.
When ublio is enabled, data is cached in memory; the disk can power down during this time, and the OS can indeed report an I/O error later on.
This is one of the most common device driver and disk problems with some versions of Seagate and WD disks. That's why I suggested that you check the log files, because OSes usually report such errors, like the ones listed on http://ntfs-3g.org/support.html#ioerror
Hey ... just switching all my data from an NTFS drive to an HFS drive (also a Western Digital MyBook, as above). Apart from the CPU usage always being insane (though I guess it's OK if you don't use it regularly), I see some weird behaviour. I copy some folder (let's say 40 GB, some less, some more) and afterwards I check the file/folder counts and the total size. Sometimes there's quite a difference (at least 300-400 MB, 20-40 files or so). Any idea how I can make sure all files are there?
Sidenotes: Maybe I'm using one version older than this, I have to check. Also, you might say "Why not use Apple's driver for copying?" Well, it turns out Apple's driver sees one of the folders only as a file, not as a folder. I checked the drive multiple times under Windows, it's fine; also, I did not knowingly enable compression/encryption or anything like that.
Hardware: This is my first Mac (a MacBook Air), so I'm not sure, but hardware-wise it should be more than able to copy files, even with the NTFS driver, right? This of course means that both drives are connected to the same USB port. Could this cause an issue? I really just wanna finish moving my files to HFS 'cause ntfs-3g is eating CPU cycles for breakfast ... any help greatly appreciated.
Update: Checked and I should have the latest version.
flo:
ReplyDelete"Insane" CPU usage is *not* normal, and indicates that something is out of the ordinary. ntfs-3g very rarely eats more than 10% CPU, and that's in extreme load situations tested on my first revision MacBook (less powerful than your equipment).
You didn't specify which build you are using... stable or ublio. The hardware specifications of a MacBook Air are quite well known, so there's no point in asking about that, but which OS X version and MacFUSE version are you using?
Are you exclusively using USB2 as the connection method between the WD drive and your computer?
Also, it seems strange that OS X's NTFS driver sees your folder as a file. Maybe it's because of unsupported features such as reparse points, junctions, symlinks... something like that.
Hey thanks for the reply.
Build used: stable
MacFuse Core: 10.5-1.7.1 (I think)
OS version: OS X 10.5.4 with all latest updates
Maybe I should refine the part about CPU usage: if I connect the two drives and let them do stuff, and generally try to do something else, the system gets really sluggish (by "something else" I mean Firefox, maybe iTunes, so no CPU-intensive stuff). Then when I look at Activity Monitor, ntfs-3g isn't always on top, and it doesn't totally flood (for lack of a better word) the CPU, but it gets up to at least 40-60% sometimes, I think.
Connection: Yes, I'm connecting it exclusively over the USB port, because the MacBook Air lacks a FireWire port (I always used that on my old Windows notebook) and has only one USB port.
About the OS X driver: Yeah, sure, I know it's strange ... do you know how I might be able to check for that? I have access to a Windows system, but that doesn't really help for copying data, because Windows doesn't know HFS and WiFi is way too slow.
flo:
The sluggishness may be due to the USB implementation in OS X. USB transfers always put a much higher burden on the system than FireWire or the internal/external SATA/IDE interfaces.
This might get amplified by the way ntfs-3g does I/O... I should do some performance testing to see if that's the case.
If you could provide me with a metadata image of the drive with that directory structure that won't copy properly, I could do some testing on my own. ntfsclone, which is included in this package, has this functionality. Read the man page ("man ntfsclone") for more info.
As a sidenote, which is not going to help you with this particular problem, I have written a piece of software called HFSExplorer ( http://hem.bredband.net/catacombae/ ) that provides a free way of accessing your files (read-only) on an HFS+ drive from within Windows (or other OSes).
O.k., I will try to get you the metadata image. I ran into the following problem: I always get an error like "ERROR(28): ftruncate failed for file '/Volumes/Media/ntfsmeta.img': No space left on device". There is about 70 GB free on that device; unfortunately the source volume is more like 250 GB. So am I guessing correctly that it checks for available space before it actually does anything, and needs the actual space taken up by the data, even if it only reads zeros? I tried piping through stdout to bzip2, but it says that's not supported.
command used (for normal output, not piping): "/usr/local/sbin/ntfsclone --metadata --output /Volumes/Media/ntfsmeta.img /dev/disk2s1"
It would be quite a hassle to free up 250+ GB, so can you think of something else, or tell me if I used the wrong command? Thanks in advance, florian
flo:
Yes, uhm... that's because ntfsclone depends on sparse files, which aren't supported by HFS+. You'd be better off storing the metadata image on another NTFS-3G mounted file system, if possible, since NTFS-3G supports sparse files.
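If you have another NTFS-3G volume mounted, the same command you used should work with only the output path changed (the volume name here is just an example):
> /usr/local/sbin/ntfsclone --metadata --output /Volumes/SomeNTFSVolume/ntfsmeta.img /dev/disk2s1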
Thanks so far, that seemed to work at first, but at 9.7 percent it stopped with an error:
ReplyDelete"ERROR(57): Write failed: Socket is not connected"
A big enough file had been created on the additional NTFS-3G drive, but because it exited with an error I deleted it and tried again, which led to "ERROR(20): Couldn't access '/Volumes/Backup/ntfsmeta.img': Not a directory". This is kinda strange, because I can access the drive with the Finder; it's just that when I try to ls its contents in the Terminal, I get an error like "ls: backup: Device not configured".
After unmounting and remounting, though, I could repeat the above, but with the same results. I tried compressing the incomplete file but got the error "bzip2: Can't open input file /Volumes/Backup/ntfsmeta.img: Not a directory."
The drive this was copied onto should be fine; it was checked on a Windows machine before. So, any ideas on this?
flo:
That sounds strange. Perhaps you just found a bug in ntfs-3g? I'll do some testing on my own and then I'll get back to you.
flo:
Okay, I can reproduce. Something is fishy about NTFS-3G sparse files. This will be investigated...
In the meantime, this is my workaround:
An alternative file system that supports sparse files is ZFS, which now works on OS X.
Go to the ZFS download page, grab the latest binaries, and follow the instructions on that page.
Now, go to Disk Utility and create a normal read/write disk image for holding the ZFS file system (on any drive). For my 60 GiB NTFS file system the sparse metadata image needed about 1.5 GiB of actual space. I made a 10 GiB disk image to be sure.
You can't initialize the disk image with ZFS in Disk Utility (at least I couldn't), so make it a HFS+ image, and then go to the Terminal to put the last bits together:
> sudo diskutil partitiondisk /dev/diskX GPTFormat ZFS %noformat% 100%
(diskX replaced by the disk id of your disk image... don't make a mistake here!)
> sudo zpool create ZFSSparseTest /dev/diskXs2
(if the ZFS partition is number 2, which it was for me... partition 1 became an EFI partition)
Now you should have /Volumes/ZFSSparseTest mounted and available. Store the metadata image there (worked for me). When ntfsclone is finished, bzip2 the sparse metadata image. I ended up with 14 MiB of bz2 compressed data from the 60 GiB (1.5 GiB actual content) sparse file.
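For the compression step, running bzip2 directly on the image is enough; it replaces the file with a .bz2 version:
> bzip2 /Volumes/ZFSSparseTest/ntfsmeta.img
Note that reading the sparse file back expands the holes, so this step can take a while even though the result is small.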
Good luck.
bzip2ing now ... takes ages.
OT: Does anyone know whether Activity Monitor's CPU graphs (for example the two bars you can set it to show in the Dock) accurately depict CPU load across cores? Because that would explain a lot of sluggishness sometimes, if processes are delegated to one core only.
OK, the workaround with ZFS worked. I uploaded the bz2 to RapidShare, hope that's OK.
http://rapidshare.com/files/133704932/ntfsmeta.img.bz2.html
It's only 4 MB now, but most files are rather big, so maybe there's not that much metadata. It also took really long, so the archive better be good :)
Would be nice if you could keep me updated about your investigation with the archive.
flo:
I'll check it out. The rapidshare way worked out fine, although if your NTFS tree structure contains any information that you wouldn't want to share with the whole world, well... you just did. :)
Could you specify which folder the internal NTFS driver sees as a file, and give an example where content got lost while copying, so I know what to test?
Yeah, I know. But I don't think there's anything interesting in there.
Folder: /Flo/flo/
I copied that to an HFS+ partition (with the Finder), then checked file/folder counts and computed the total size (this also works for most subdirectories, though not all of them, I think). So I can't really tell what got lost (which is my problem); I just know the file counts and sizes are different. I tried a tool like unison, and it finds differences, but using it on the complete folder structure might make the Mac unusable for days.
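Maybe a dry-run rsync would be lighter than unison; as far as I understand it, with --size-only it only compares names and sizes, without reading file contents or copying anything (paths as in my setup, so adjust):
> rsync -rn -v --size-only /Volumes/NTFS/Flo/flo/ /Volumes/MacHD/flo/ | head -40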
Hey guys,
Just wanted to let you know that even if all NTFS drives are removed from the Startup Disk preference pane, using a boot menu like rEFIt ( http://refit.sourceforge.net/ ) will allow you to boot into Windows anyway.
Hope that helps.
Phil:
You don't even need rEFIt; you can just hold the Option key at startup.
ntfs-3g 1.2812 released
Raymond:
I've had a hard disk crash on my Mac, so I'm in the process of restoring my system to a new drive.
I may update the package later today, if I'm up and running by then.
NTFS-3G for Mac OS X 1.2812 has been released.