Opened 3 years ago

Last modified 3 years ago

#6110 new Bug

Potential Denial of Service in web.c

Reported by: x190 Owned by: jordan
Priority: High Milestone: None Set
Component: libtransmission Version: 2.92
Severity: Normal Keywords:
Cc:

Description

A malicious server could cause libcurl to download an infinite amount of data, potentially causing all of memory or disk to be filled. Setting the CURLOPT_MAXFILESIZE_LARGE option is not sufficient to guard against this. Instead, the app should monitor the amount of data received within the write or progress callback and abort once the limit is reached.

A malicious HTTP server could also cause an infinite redirection loop, resulting in a denial of service. This can be mitigated by using the CURLOPT_MAXREDIRS option. [NOTE: We currently allow unlimited redirects (the documentation's "Set it to -1 for an infinite number of redirects." case), so we are vulnerable to this attack.]

See 'Denial of Service' section:
https://curl.haxx.se/libcurl/c/libcurl-tutorial.html
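
For illustration, the tutorial's advice boils down to something like the following standalone sketch (not Transmission code; the callback, struct and limit values here are made up): cap the redirect count, and have the write callback count bytes and abort once a budget is exceeded, since returning a short count makes libcurl fail the transfer with CURLE_WRITE_ERROR.

  #include <stddef.h>
  #include <curl/curl.h>

  /* illustrative only -- real code would pick a per-request limit */
  struct byte_budget { size_t received; size_t limit; };

  /* returning less than size*nmemb makes libcurl abort the transfer
     with CURLE_WRITE_ERROR, so the download can never exceed the budget */
  static size_t write_cb (void * ptr, size_t size, size_t nmemb, void * vbudget)
  {
    struct byte_budget * b = vbudget;
    const size_t len = size * nmemb;
    (void) ptr; /* this sketch only counts bytes; real code would also buffer them */
    if (b->received + len > b->limit)
      return 0;
    b->received += len;
    return len;
  }

  static void harden_handle (CURL * e, struct byte_budget * budget)
  {
    curl_easy_setopt (e, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt (e, CURLOPT_WRITEDATA, budget);
    curl_easy_setopt (e, CURLOPT_FOLLOWLOCATION, 1L); /* follow redirects... */
    curl_easy_setopt (e, CURLOPT_MAXREDIRS, 10L);     /* ...but only finitely many */
  }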

Attachments (1)

curl-6110.zip (4.7 KB) - added by cfpp2p 3 years ago.


Change History (12)

comment:1 Changed 3 years ago by cfpp2p

We're using:
curl_easy_setopt (e, CURLOPT_TIMEOUT, task->timeout_secs);
I think that would stop this. Have you tested?

comment:2 Changed 3 years ago by x190

Read the source material. I believe you are wrong! It's an "infinite redirection loop".

Also, there are two issues here, and neither would necessarily be stopped by a timeout, or the writer of the tutorial would have said so. Again, RTFM!

comment:3 Changed 3 years ago by Robby

Aren't security issues usually reported privately?

comment:4 Changed 3 years ago by cfpp2p

I set up the test environment and reproduced the denial of service for the infinite-redirects scenario.

I simply had transmission request a torrent by URL.

The full cURL log for just 10 minutes was over 500 megabytes. I've attached a sample. The number of redirects kept increasing as time went on; the later GETs were producing 4096 redirects.

Changed 3 years ago by cfpp2p

comment:5 follow-up: Changed 3 years ago by x190

All we have to do is put a reasonable limit in CURLOPT_MAXREDIRS.

As for the memory-sucking disk-filling first issue mentioned in this ticket, that should be easy enough to fix using my handy-dandy prog_func () in #5923.

EDIT: Please read the entire "Security Considerations" section of the document referenced above. We can and should do better.

For example, I will be using a 'webseed address sanitizing function', similar to the one for peers, in a new version of the patch for #5923, but I am unsure whether its current version covers everything. See the "IPv6 Addresses" section of the referenced document.

Last edited 3 years ago by x190

comment:6 in reply to: ↑ 5 ; follow-ups: Changed 3 years ago by x190

Replying to x190:

All we have to do is put a reasonable limit in CURLOPT_MAXREDIRS.

Something like this in libtransmission/web.c's createEasy ():

  /* don't allow infinite redirects: https://trac.transmissionbt.com/ticket/6110 */
  curl_easy_setopt (e, CURLOPT_MAXREDIRS, 10L);

As for the memory-sucking disk-filling first issue mentioned in this ticket, that should be easy enough to fix using my handy-dandy prog_func () in #5923.

I think in libtransmission/web.c's writeFunc (), we could count the bytes added to the buffer and abort at MAX_PIECE_SIZE (*"A reasonable upper limit of an expected piece size in the wild is clearly 4 MiB.").
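
Untested sketch of what that could look like, assuming writeFunc () keeps appending to the task's response evbuffer as it does now (the speed-limiting code the real function also contains is omitted here, and a MAX_PIECE_SIZE constant would have to be defined somewhere):

  static size_t
  writeFunc (void * ptr, size_t size, size_t nmemb, void * vtask)
  {
    const size_t byteCount = size * nmemb;
    struct tr_web_task * task = vtask;

    /* refuse to buffer more than MAX_PIECE_SIZE bytes in total;
       returning 0 makes libcurl fail the transfer with CURLE_WRITE_ERROR */
    if (evbuffer_get_length (task->response) + byteCount > MAX_PIECE_SIZE)
      return 0;

    evbuffer_add (task->response, ptr, byteCount);
    return byteCount;
  }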

Last edited 3 years ago by x190

comment:7 in reply to: ↑ 6 ; follow-up: Changed 3 years ago by cfpp2p

Replying to x190:

Replying to x190:

All we have to do is put a reasonable limit in CURLOPT_MAXREDIRS.

Something like this in libtransmission/web.c's createEasy ():

  /* don't allow infinite redirects: https://trac.transmissionbt.com/ticket/6110 */
  curl_easy_setopt (e, CURLOPT_MAXREDIRS, 10L);

As for the memory-sucking disk-filling first issue mentioned in this ticket, that should be easy enough to fix using my handy-dandy prog_func () in #5923.

I think in libtransmission/web.c's writeFunc (), we could count the bytes added to the buffer and abort at MAX_PIECE_SIZE (*"A reasonable upper limit of an expected piece size in the wild is clearly 4 MiB.").


CURLE_OK
CURLE_TOO_MANY_REDIRECTS (see the sketch below)

path_component_is_suspicious() allows 0-chars (0x00)
overflow when piece size exceeds 4GB
Potential crash-causing overflow in blockCountInPiece
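
For reference, with CURLOPT_MAXREDIRS in place a transfer that hits the cap finishes with CURLE_TOO_MANY_REDIRECTS instead of CURLE_OK. With the multi interface that web.c uses, that result shows up in curl_multi_info_read (); a rough fragment (the 'multi' handle name and the fprintf reporting are just placeholders):

  CURLMsg * msg;
  int msgs_left;

  while ((msg = curl_multi_info_read (multi, &msgs_left)) != NULL)
    {
      if (msg->msg != CURLMSG_DONE)
        continue;

      if (msg->data.result == CURLE_TOO_MANY_REDIRECTS)
        fprintf (stderr, "transfer aborted: redirect limit reached\n");
      else if (msg->data.result != CURLE_OK)
        fprintf (stderr, "transfer failed: %s\n", curl_easy_strerror (msg->data.result));
    }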

comment:8 in reply to: ↑ 6 Changed 3 years ago by x190

Replying to x190:

Replying to x190:

As for the memory-sucking disk-filling first issue mentioned in this ticket...

I think in libtransmission/web.c's writeFunc (), we could count the bytes added to the buffer and abort at MAX_PIECE_SIZE (*"A reasonable upper limit of an expected piece size in the wild is clearly 4 MiB.").

Is anybody up to writing a function that parses the "char * range" that web.c's writeFunc () knows about, does the math, and returns a value we can check against while adding bytes? Total bytes added should always be <= that value for webseeds, and we could use a maximum of about 4K for trackers.
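
Roughly what I have in mind, assuming the range string is the same "first-last" value that gets handed to CURLOPT_RANGE (untested; returns 0 when there is no usable range, in which case a fixed cap would apply):

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  /* parse a "first-last" byte range (the CURLOPT_RANGE form) and return
     the number of bytes it covers, or 0 if it can't be parsed */
  static uint64_t
  rangeByteCount (const char * range)
  {
    uint64_t first, last;

    if (range == NULL)
      return 0;

    if (sscanf (range, "%" SCNu64 "-%" SCNu64, &first, &last) != 2)
      return 0;

    return last >= first ? last - first + 1 : 0;
  }

writeFunc () would then refuse to buffer more than that many bytes for webseed tasks and fall back to the small fixed cap for everything else.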

comment:9 in reply to: ↑ 7 ; follow-up: Changed 3 years ago by reardon

Replying to cfpp2p:

I think in libtransmission/web.c's writeFunc (), we could count the bytes added to the buffer and abort at MAX_PIECE_SIZE (*"A reasonable upper limit of an expected piece size in the wild is clearly 4 MiB.").

Er, isn't this same curl object used to download & upload .torrent files? 4 MiB may not be a reasonable limit for the former. Perhaps 16 MiB?

comment:10 in reply to: ↑ 9 Changed 3 years ago by cfpp2p

Replying to reardon:

Er, isn't this same curl object used to download & upload .torrent files? 4 MiB may not be a reasonable limit for the former. Perhaps 16 MiB?

From comment:4:

"I simply had transmission request a torrent by URL."

comment:11 Changed 3 years ago by x190

The redirect problem can be handled as above (included in https://trac.transmissionbt.com/ticket/5923#comment:32). The code for this item is in web.c:

  /* don't allow infinite redirects: https://trac.transmissionbt.com/ticket/6110 */
  curl_easy_setopt (e, CURLOPT_MAXREDIRS, 10L);

The oversized download issue can be addressed for webseeds by parsing the range and limiting the bytes added to the task->content buffer to that amount, since a range specifies an exact byte count: the server either supports it properly or it doesn't.

For trackers and all other curl tasks, an upper limit should be determined.
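
A possible way to pick that limit per task, assuming the rangeByteCount () helper sketched in comment:8 and borrowing the 16 MiB figure from comment:9 as a stand-in for the non-range cases (the exact number is still to be decided):

  /* placeholder cap for tasks without an explicit byte range
     (tracker announces, .torrent fetches, ...) -- exact value TBD */
  #define DEFAULT_RESPONSE_MAX ((uint64_t) 16 * 1024 * 1024)

  static uint64_t
  taskDownloadLimit (const struct tr_web_task * task)
  {
    const uint64_t fromRange = rangeByteCount (task->range);
    return fromRange != 0 ? fromRange : DEFAULT_RESPONSE_MAX;
  }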

Last edited 3 years ago by x190
Note: See TracTickets for help on using tickets.