Opened 12 years ago
Closed 12 years ago
#3988 closed Bug (duplicate)
by default, transmission should use normal 'async' writes
| Reported by: | Astara | Owned by: | |
| --- | --- | --- | --- |
| Priority: | Normal | Milestone: | None Set |
| Component: | Transmission | Version: | 2.20 |
| Severity: | Normal | Keywords: | performance, synchronous, asynchronous, I/O, optimization, configuration |
| Cc: | | | |
Description
Unless the disk writes have their own thread that enqueues write requests so that they do not block, transmission is unnecessarily slowing itself down by using synchronous writes when they were not requested.
Usually, if a user wants synchronous writes, they can mount the disk with options that give them that. There are few circumstances where you really want or need synchronous writes. Even when a torrent is moved, the move(s) can, 99.9999% of the time, be done asynchronously, followed by a 'flush' to ensure the data is written out before unlinking or removing the source files.
In normal operation, though, when writing received data to disk, there is no reason transmission should be using synchronous writes. They simply slow things down and force transmission to hang and wait for the I/O to complete.
If you feel this is necessary, add it as an 'option', but it's not something most users will want or need.
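For illustration, the idea in the description can be sketched as a single background writer thread draining a queue of write requests, so the thread receiving network data never blocks on disk I/O. This is only a minimal sketch of the technique, not Transmission's actual code; all names here (write_req, enqueue_write, writer_main) are hypothetical.

```c
/* Hypothetical sketch: a background writer thread drains a queue of
 * write requests, so the network thread never touches the disk.
 * None of these names are from Transmission's source. */
#include <assert.h>
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

typedef struct write_req {
    int fd;
    off_t offset;
    size_t len;
    char *buf;              /* owned by the request */
    struct write_req *next;
} write_req;

static write_req *queue_head = NULL, *queue_tail = NULL;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_cond = PTHREAD_COND_INITIALIZER;
static int shutting_down = 0;

/* Called from the network thread: O(1), never blocks on disk. */
void enqueue_write(int fd, off_t offset, const void *data, size_t len)
{
    write_req *req = malloc(sizeof *req);
    req->fd = fd;
    req->offset = offset;
    req->len = len;
    req->buf = malloc(len);
    memcpy(req->buf, data, len);
    req->next = NULL;

    pthread_mutex_lock(&queue_lock);
    if (queue_tail) queue_tail->next = req; else queue_head = req;
    queue_tail = req;
    pthread_cond_signal(&queue_cond);
    pthread_mutex_unlock(&queue_lock);
}

/* The writer thread: the only place pwrite() is called, so any
 * blocking happens here instead of in the event loop. */
void *writer_main(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        while (!queue_head && !shutting_down)
            pthread_cond_wait(&queue_cond, &queue_lock);
        if (!queue_head && shutting_down) {
            pthread_mutex_unlock(&queue_lock);
            return NULL;
        }
        write_req *req = queue_head;
        queue_head = req->next;
        if (!queue_head) queue_tail = NULL;
        pthread_mutex_unlock(&queue_lock);

        pwrite(req->fd, req->buf, req->len, req->offset);
        free(req->buf);
        free(req);
    }
}
```

Shutdown is signaled by setting shutting_down under the lock; the writer drains any remaining requests before exiting, so the "flush before unlinking" step described above has a natural place to hook in.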
Right now I'm running latencytop, and the top program causing latency is almost always transmission, with latencies averaging around 100ms/second but often ranging over 200ms. Other operations, not so much under programmatic control, are writing a page to disk (~20-40ms), waiting for an event in epoll (3-4ms), and waiting for an event in select (<1-2ms).
But at the top of the pile is almost always writing a buffer to disk 'synchronously'.
I notice that the average has dropped to between 40-60ms when nothing else is on the disk. It seems like waiting around doing nothing for 5-20% of the time might have an impact on performance.
In general, normal user applications have no need for synchronous I/O. If you were a database application used by a bank or the like, it might be warranted, but in this case it's a waste, and it can keep the OS from doing the right thing in handling space allocation, scheduling I/O, and merging adjacent/consecutive I/O requests.
Besides the impact on transmission, this could easily explain some of the problems I've seen when running audio-video applications where there is much greater sensitivity to higher latency.
Seeing transmission generate over 200ms of latency is ridiculous. Admittedly, other disk I/O was going on at the time, but none of it was blocking; only transmission was blocking, and it was the only thing showing up in latencytop.
That would definitely cause major harm to any user audio and video applications where latency is a sensitive issue. Transmission should, IMO, never be causing or adding to latency. Its activity shouldn't be blocking other I/O, but that's exactly what it is doing.
Note that the disk that it is writing to is a *RAID* disk where you can perform I/O at speeds in excess of 400MB/s.
This has bad implications for most users who won't be reading/writing to disks with anything near that throughput -- so even without external I/O load, a laptop user could easily see transmission interrupting their ability to play back music or video unnecessarily.
:-(
Change History (7)
comment:1 in reply to: ↑ description Changed 12 years ago by jordan
comment:2 Changed 12 years ago by Astara
I thought it might be my mount options... I diddled them, and the latency dropped when transmission first started. It seems like it might be less, but it still jumps up to 50ms every so often. But now I've also killed off every other process that had that disk open (editors holding open files shouldn't have generated I/O).
A call backtrace (may not be the same each time) shows:
    sync_buffer
    __wait_on_buffer
    __block_prepare_write
    block_write_begin_newtrunc
    block_write_begin
    generic_file_buffered_write
    xfs_file_aio_write
    do_sync_write
    vfs_write
    sys_pwrite64
    system_call_fastpath
It may be that the entry point is from the bottom, with the trace going 'up', but that's a guess.
Is there a pwrite call? How is that different from write? Isn't it just "write with a location"? I wonder if there's something in it that requires it to be synchronous?
comment:3 Changed 12 years ago by jordan
Well wait, you're jumping too many steps ahead. A glibc write() call on Linux will wind up calling sys_write(), just as a glibc pwrite64() call on Linux will wind up calling sys_pwrite64(). So in both cases you wind up calling vfs_write(), which calls do_sync_write(). I think pwrite() and write() would behave pretty much the same way.
Or did you mean we should use aio_write() instead of write() or pwrite()?
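For reference, the POSIX semantics under discussion can be shown in a small standalone demo: pwrite() is just "write at an explicit offset". It does not move the file position, and neither write() nor pwrite() forces data to the platter unless the descriptor was opened with O_SYNC (or fsync() is called). Nothing here is Transmission code; the function name and file path are made up for the example.

```c
/* Demo of write() vs pwrite() semantics; purely illustrative. */
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int pwrite_demo(const char *path)
{
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0) return -1;

    /* A regular write() advances the file position... */
    write(fd, "aaaa", 4);
    assert(lseek(fd, 0, SEEK_CUR) == 4);

    /* ...while pwrite() writes at the given offset and leaves the
     * file position untouched. */
    pwrite(fd, "bb", 2, 1);
    assert(lseek(fd, 0, SEEK_CUR) == 4);

    char buf[5] = {0};
    pread(fd, buf, 4, 0);
    close(fd);
    return strcmp(buf, "abba") == 0 ? 0 : -1;
}
```

Both calls go through the same buffered-write path in the kernel, which is consistent with the backtrace quoted earlier (generic_file_buffered_write under sys_pwrite64).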
comment:4 follow-up: ↓ 5 Changed 12 years ago by Astara
Gosh, I wasn't sure when I wrote this what you were using -- I was just looking at the results in latencytop. Are you familiar with the tool? It's in a package on my SuSE system called 'latencytop'. I remember the discussion on the Linux kernel mailing list when it was being developed: there were lots of unexplained 'drops' in people's various 'real-time' (usually audio/video related) applications. A lot of work went into removing the 'big lock' in the kernel and adding many more "yield" points (the old behavior was like the MS Windows design, which only let go of control when it wanted to -- which is why Windows did, and some parts still do, hang and lock up everything). But besides the increase in 'voluntary' preemption points, the kernel was also made fully preemptible.
If latencytop isn't in your distro, you can probably build it from source from its website at www.latencytop.org.
I'm not sure where this is being called, BUT the only writes that are occurring regularly are the writing out of downloaded pieces.
It's an area that bothers me a bit, not because it's a problem for me now, but because I'd like to see transmission be able to 'auto-optimize' itself for the machine it is on. In more than one place, decisions have gone in that slow transmission down, on the theory that the performance hit can be hidden or won't be noticed. There's no such thing as a free lunch. Just because users may not notice doesn't mean it still isn't slow. Microsoft has been king at managing 'perceptions', but Win7 is still a bloated pig. They talk about it booting faster -- faster than what? Vista, which was 3-5x slower than XP. They talk about the speedups and how much it has improved over Vista, but unfortunately it is still 15-20% slower (and that's being generous) than WinXP.
They deliberately crippled WinXP to force people to move to Win7 if they wanted more memory (32-bit WinXP's physical limit is 64G on today's hardware). There was no need to create Win7 other than to satisfy the DRM requirements of Hollywood (the 'NBC' of MSNBC)... As a result, they added an entire new virtualization layer over all of the hardware, with the result that many things 'close to the hardware' broke -- and not because they were supporting new hardware, but all to limit the user's access to their own hardware!
Tests that are missing from transmission's suite -- besides the 10-100K torrent / 1 tracker case -- are ones that measure its max high-speed throughput when downloading a file from 1, or 10-20, clients. Do we get close to HTTP speed, or is it more like 10%? How fast can people download from a T-client? How do we compare to other network transport layers?
Sorry to get off track, but they are related -- without knowing the impact of these latency points, it's hard to say what the best solution would be.
Transmission doesn't do a lot of file deletes, but as an example, a cache manager like squid, which does, actually has an optional, dedicated, multi-threaded (multi-worker) delete daemon for environments that are hit hard with delete penalties.
aio doesn't give you anything (IMO) that you can't get with your own background I/O daemon(s) managing the work queues per disk device.
comment:5 in reply to: ↑ 4 Changed 12 years ago by jordan
Replying to Astara:
aio doesn't give you anything (IMO) that you can't get with your own background I/O daemon(s) managing the work queues per disk device.
So this ticket's issue is the same as the "worker IO thread to ease the load in libtransmission/libevent thread" ticket?
comment:6 Changed 12 years ago by Astara
*probably* -- I'm saying that with some hesitation, since I don't know for sure whether aio might provide some benefit, and I don't know why pwrite would do a synchronous write when the file isn't opened with O_SYNC. That bothers me, but for now you can probably roll this under the other bug.
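On that last point: whether writes are synchronous-to-disk is a property of the open flags, not of using pwrite(), and it can be checked at runtime with fcntl(F_GETFL). (For what it's worth, the do_sync_write seen in the trace is just the kernel's name for the ordinary non-aio write path, not an indication of O_SYNC.) A tiny sketch, with a made-up helper name; note that O_SYNC handling has had historic quirks on Linux, so treat this as illustrative:

```c
/* Illustrative helper: report whether a descriptor was opened with
 * O_SYNC, i.e. whether each write must reach stable storage before
 * returning. The name fd_is_sync is invented for this example. */
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

int fd_is_sync(int fd)
{
    int flags = fcntl(fd, F_GETFL);
    return flags >= 0 && (flags & O_SYNC) != 0;
}
```

A descriptor opened with plain O_RDWR gets ordinary buffered writes; only adding O_SYNC at open() (or calling fsync()/fdatasync()) forces data to disk.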
comment:7 Changed 12 years ago by jordan
- Resolution set to duplicate
- Status changed from new to closed
Closing as a duplicate of #3988
Replying to Astara:
The plan is for #1753 to address this.
What synchronous writes are you referring to? I'm not sure what you mean. Is there a way to tie the output of latency top back to specific lines of code?