Opened 12 years ago

Closed 11 years ago

#1631 closed Bug (fixed)

Announce stays "in progress"

Reported by: charles
Owned by: charles
Priority: Normal
Milestone: 1.80
Component: libtransmission
Version: 1.51
Severity: Major
Keywords:
Cc:

Description (last modified by charles)

At some point all the pending announces experience a slowdown and can take a long time to complete. This can cause poor performance (or no performance!) if Transmission is unable to complete the announces to get a list of peers from the tracker.

Sometimes this will happen right after starting Transmission 1.42; sometimes it doesn't happen at all -- I haven't figured out the trigger yet.

This seems to be new in 1.4x. My *guess* is that it comes from ticket #1370.

Change History (22)

comment:1 Changed 12 years ago by charles

  • Resolution set to fixed
  • Status changed from new to closed

comment:2 Changed 12 years ago by charles

  • Resolution fixed deleted
  • Status changed from closed to reopened

comment:3 Changed 12 years ago by charles

It seems like during the slowdown period, all we get is

[03:34:12.119] web libevent timer is done (web.c:334)
[03:34:12.119] web check_run_count: prev_running 50, still_running 50 (web.c:281)
[03:34:12.119] web event_cb(): fd -1, mask 0, still_running is 50 (web.c:287)

the last log message to differ from this pattern is

[03:34:06.037] web libevent timer is done (web.c:334)
[03:34:06.037] web check_run_count: prev_running 50, still_running 49 (web.c:281)
*   Trying w.x.y.z... [03:34:06.037] web sock_cb: what is 2, sockinfo is (nil) (web.c:388)
[03:34:06.037] web creating a sockinfo 0x7fdff0675320 for fd 108 (web.c:373)
[03:34:06.037] web setsock: fd is 108, curl action is 2, libevent action is 20 (web.c:359)
[03:34:06.037] web deleting the pending global timer (web.c:227)
[03:34:06.037] web adding a timeout for 60 seconds from now (web.c:237)
[03:34:06.037] web event_cb(): fd -1, mask 0, still_running is 50 (web.c:287)
[03:34:06.037] web deleting the pending global timer (web.c:227)
[03:34:06.037] web adding a timeout for 60 seconds from now (web.c:237)
[03:34:06.037] web deleting the pending global timer (web.c:227)
[03:34:06.037] web adding a timeout for 2 seconds from now (web.c:237)

comment:4 Changed 12 years ago by charles

Experimental code added for this ticket to trunk in r7517

comment:5 Changed 12 years ago by charles

probable fix committed to trunk in r7536.

comment:6 Changed 12 years ago by charles

  • Description modified (diff)

comment:7 Changed 12 years ago by charles

Nope, r7536 didn't fix things either, although it now takes longer for the problem to manifest. Now I have to run the program for 12 hours or so before the problem might or might not occur, so testing cycles are slow. :/

comment:8 Changed 12 years ago by charles

r7557 seems to be faring better...

comment:9 Changed 12 years ago by charles

  • Resolution set to fixed
  • Status changed from reopened to closed

I haven't seen this at all since r7557.

comment:10 Changed 12 years ago by charles

  • Milestone changed from 1.43 to 1.50

comment:11 Changed 12 years ago by charles

  • Severity changed from Major to Critical

comment:12 Changed 12 years ago by rb07

  • Milestone changed from 1.50 to None Set
  • Resolution fixed deleted
  • Status changed from closed to reopened
  • Version changed from 1.42 to 1.51

It's back!

Using release 1.51 (7970) transmission-daemon is again getting stuck:

$ transmission-remote -n admin:password -t 34 -i ... TRACKER

Latest announce: Sat Mar 7 14:02:35 2009 Announce URL: http://www.tb.bz:2710/c/announce Announce response: OK (200) Next announce: In Progress Latest scrape: Sat Mar 7 15:04:55 2009 Scrape response: OK (200) Next scrape: In Progress Tracker knows of 189 seeders and 2 leechers Tracker has seen 1037 clients complete this torrent

... ~ $ date Sat Mar 7 17:43:37 CST 2009


I'm seeing the same problem with 2 different trackers, the daemon never renews the session with idle torrents.

comment:13 Changed 12 years ago by rb07

Sorry for the bad format; let's try the log part again:

$ transmission-remote -n admin:password -t 34 -i
...
TRACKER
  Latest announce: Sat Mar  7 14:02:35 2009
  Announce URL: http://www.thebox.bz:2710/8ff46adf0433dbca562264df8ffd318c/announce
  Announce response: OK (200)
  Next announce:   In Progress
  Latest scrape:   Sat Mar  7 15:04:55 2009
  Scrape response: OK (200)
  Next scrape:     In Progress
  Tracker knows of 189 seeders and 2 leechers
  Tracker has seen 1037 clients complete this torrent
...
~ $ date
Sat Mar  7 17:43:37 CST 2009
-----------^^^^^^^^  5 hours "in progress" !?

comment:14 Changed 12 years ago by rb07

My report could be inaccurate. I just saw on the tracker's Web site that they had a power failure Saturday from 2am to 8am (UK time)... I'm 6 hours behind, so it was 8am to 2pm my time... if I read this right, Transmission recovered from the tracker disappearing for 6 hours, made one announce, a few scrapes, then lost it.

But as I said (2 posts back) I saw the problem with 2 different trackers. In fact one reports right now that I'm not seeding anything, and I have 4 torrents seeding.

The daemon shows (same sample as above, restarted 2 days ago):

TRACKER
  Latest announce: Sun Mar  8 10:09:02 2009
  Announce URL: http://www.tb.bz:2710/announce
  Announce response: OK (200)
  Next announce:   In Progress
  Latest scrape:   Sun Mar  8 12:56:30 2009
  Scrape response: No Response (0)
  Next scrape:     In Progress
  Tracker knows of 183 seeders and 1 leechers
  Tracker has seen 1038 clients complete this torrent

It's over a day since the last announce/scrape.

comment:15 Changed 12 years ago by rb07

Oops! Wrong time calculation: the outage was from Friday 8pm to Saturday 2am my time. Transmission then worked fine for 12 to 13 hours, then lost it.

comment:16 follow-up: Changed 12 years ago by livings124

I'm confused. Are you saying this is an issue or isn't?

comment:17 in reply to: ↑ 16 Changed 12 years ago by rb07

Replying to livings124:

I'm confused. Are you saying this is an issue or isn't?

It is. Otherwise I wouldn't leave this ticket reopened.

The point is: the daemon should show an error condition when an announce or scrape takes too long, and really try to reconnect afterward.

Why do I say "really"? Because if I stop-and-start the torrent it does connect. Example:

TRACKER
  Latest announce: Tue Mar 10 11:46:33 2009
  Announce URL: http://www.tb.bz:2710/...318c/announce
  Announce response: OK (200)
  Next announce:   In Progress
  Latest scrape:   Tue Mar 10 12:16:48 2009
  Scrape response: OK (200)
  Next scrape:     In Progress
  Tracker knows of 140 seeders and 0 leechers
  Tracker has seen 1056 clients complete this torrent
~ $ date
Wed Mar 11 10:49:01 CST 2009

Almost a day "in progress". Then after a stop-and-start:

$ date
Wed Mar 11 11:27:00 CST 2009
~ $ transmission-remote -n admin:password -t 34 -S
localhost:9091 responded: "success"
~ $ transmission-remote -n admin:password -t 34 -s
localhost:9091 responded: "success"
~ $ transmission-remote -n admin:password -t 34 -i
...
TRACKER
  Latest announce: Wed Mar 11 11:27:33 2009
  Announce URL: http://www.tb.bz:2710/...318c/announce
  Announce response: OK (200)
  Next announce:   Wed Mar 11 11:57:34 2009
  Latest scrape:   Tue Mar 10 12:16:48 2009
  Scrape response: OK (200)
  Next scrape:     Wed Mar 11 11:42:19 2009
  Tracker knows of 128 seeders and 1 leechers
  Tracker has seen 1056 clients complete this torrent
...
$ date
Wed Mar 11 11:27:59 CST 2009

It took no time to get that announce and scrape.

comment:18 Changed 11 years ago by charles

Ticket #1770 has been marked as a duplicate of this ticket.

comment:19 Changed 11 years ago by charles

  • Summary changed from Announce is "in progress" & takes a long time to complete to Announce stays "in progress"

comment:20 Changed 11 years ago by charles

  • Milestone changed from None Set to 1.74
  • Severity changed from Critical to Major

comment:21 Changed 11 years ago by charles

  • Milestone changed from 1.74 to 1.80

oops

comment:22 Changed 11 years ago by charles

  • Resolution set to fixed
  • Status changed from reopened to closed

fixed in trunk by r9171 for 1.80
