Compare commits

...

138 Commits

Author SHA1 Message Date
Philipp Hagemeister
f733b05302 release 2016.01.23 2016-01-23 12:03:12 +01:00
Sergey M․
6fa73386cb [drtv] Use IETF language tag 2016-01-23 01:54:00 +06:00
Sergey M․
5ca01bb9e4 [kanalplay] Use IETF language tag 2016-01-23 01:51:18 +06:00
Sergey M․
1ca59daca9 [options] Clarify language tags 2016-01-23 01:50:06 +06:00
Sergey M․
594c4d79a5 [svt] Improve subtitles extraction and add test (Closes #8265) 2016-01-23 01:47:54 +06:00
Marian Sigler
1f16b958b1 [SVTPlay] Add subtitle support 2016-01-23 01:28:31 +06:00
Sergey M․
4c0d13df9b [lovehomeporn] Add extractor 2016-01-23 00:52:23 +06:00
Sergey M․
b2c6528baf [ruleporn] Rework in terms of nuevo (Closes #8206) 2016-01-23 00:40:11 +06:00
Sergey M․
ea17820432 [nuevo] Improve thumbnail extraction 2016-01-23 00:38:58 +06:00
Dankryn
1257b049bc [ruleporn] Add new extractor 2016-01-23 00:13:27 +06:00
Sergey M․
b969813548 Credit @nexAkari for trollvids and nuevo (#7728) 2016-01-23 00:10:49 +06:00
Sergey M․
10677ece81 [nuevo] Simplify nuevo extractors (Closes #7728) 2016-01-23 00:04:33 +06:00
Andrew "Akari" Alexeyew
d570746e45 [nuevo] Generalize nuevo extractor and add support for trollvids
Supports only the nuevo player for now (most common).

[trollvids] convert duration to an int

[trollvids] added a test

[trollvids] made flake8 shut up

Generalized the Nuevo extractor

Affects: anitube, trollvids, trutube

[nuevo] Complied with the code comments.
2016-01-22 23:29:24 +06:00
Sergey M․
4fcd9d147d [arte:cinema] Add extractor 2016-01-22 23:00:50 +06:00
Sergey M․
9c54ae3387 [arte:future] Make duplicated test matching only 2016-01-22 23:00:05 +06:00
François Charlier
24114fee74 [arte:future] Fix extraction
[arte] Add support for more "Arte Future" URIs
2016-01-22 22:58:37 +06:00
Sergey M․
220ee33f2b [cbsnews] Simplify subtitles extraction and fix test (Closes #8295) 2016-01-22 22:23:21 +06:00
John Assael
4118cc02c1 [cbsnews] Extract subtitles
added test function for CBS News subtitles
2016-01-22 22:15:51 +06:00
Jaime Marquínez Ferrándiz
32d77eeb04 [downloader/common] report_retry: Don't crash when retries is infinite (fixes #8299) 2016-01-22 14:49:17 +01:00
Filippo Valsorda
032f232626 Merge pull request #8142 from FiloSottile/filippo/updates
[update] fix (unexploitable) BB'06 vulnerability in rsa_verify
2016-01-21 20:17:37 +00:00
Filippo Valsorda
4d318be195 [update] fix (unexploitable) BB'06 vulnerability in rsa_verify
The rsa_verify code was vulnerable to a BB'06 attack, allowing an attacker to forge
signatures for arbitrary messages if and only if the public key exponent is
3.  Since the updates key is hardcoded to 65537, there is no risk for
youtube-dl, but I don't want vulnerable code in the wild.

The new function adopts a way safer approach of encoding-and-comparing to
replace the dangerous parsing code.
2016-01-21 20:12:17 +00:00
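For reference, the encoding-and-comparing approach described above can be sketched as follows (a minimal illustration assuming an RSA key given as an (n, e) tuple, a hex-encoded signature and SHA-256 PKCS#1 v1.5 padding; the helper is illustrative and not necessarily the exact code that landed):

```python
import hashlib

def rsa_verify(message, signature, key):
    # Encode-and-compare: rebuild the only encoding we are willing to accept
    # and compare it against the "decrypted" signature, instead of parsing
    # the padding out of the signature (the BB'06 failure mode).
    n, e = key                                  # modulus and public exponent
    byte_size = (n.bit_length() + 7) // 8
    decrypted = ('%x' % pow(int(signature, 16), e, n)).encode()
    decrypted = (byte_size * 2 - len(decrypted)) * b'0' + decrypted  # pad to full width
    # Expected block: 0x00 0x01 FF..FF 0x00 || DigestInfo(SHA-256) || digest, as hex
    asn1 = b'3031300d060960864801650304020105000420'
    asn1 += hashlib.sha256(message).hexdigest().encode()
    if byte_size < len(asn1) // 2 + 11:
        return False
    expected = b'0001' + (byte_size - len(asn1) // 2 - 3) * b'ff' + b'00' + asn1
    return expected == decrypted
```

Because the comparison covers the whole encoded block, a forgery that relies on sloppy padding checks no longer verifies.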
Yen Chi Hsuan
6b45f9aba2 [iqiyi] Update key (closes #8292) 2016-01-22 02:14:47 +08:00
Sergey M․
1e10d02fec [hitbox] Skip subscribe only formats (Closes #8217) 2016-01-21 23:28:22 +06:00
Sergey M․
51290d8457 [youtube] Simplify automatic captions URL check (Closes #8287) 2016-01-21 22:58:03 +06:00
Dimitre Liotev
582f4f834e Fix issue #8109 (error when downloading automatic captions) 2016-01-21 22:55:36 +06:00
Sergey M․
e87d98b0dd [yahoo] Add and improve content id regexes (Closes #8290) 2016-01-21 22:42:50 +06:00
igv
383496e65e Additional regex for yahoo extractor 2016-01-21 22:35:28 +06:00
Jaime Marquínez Ferrándiz
4519c1f43c [vimeo] 'ext' must be a string, not a tuple (fixes #8288)
There was a ',' at the end of the line.
2016-01-21 12:43:45 +01:00
Sergey M․
a616f65471 [tube8] PEP 8 2016-01-20 21:30:29 +06:00
CeruleanSky
1f78ed189a [OraTV] update extractor
"current" is now "video"
"hls_stream" is now hls_stream without quotes
video_id is now id
duration for the current video is not present (for other videos it is)

Modified the regex that finds the hls_stream variable to work regardless of whether it is quoted or not.

[ora] Improve (Closes #8273)
2016-01-20 21:28:42 +06:00
Sergey M․
7dde358adc [tube8] Extract duration and modernize 2016-01-20 20:07:32 +06:00
Sergey M․
27b83249c9 [tube8] Fix extraction and extract all formats (Closes #8281) 2016-01-20 20:00:51 +06:00
Yen Chi Hsuan
56aa074538 Credit @FounderSG for WeiqiTV and LetvCloud (#7994)
[ci skip]
2016-01-20 13:20:03 +08:00
Jaime Marquínez Ferrándiz
9d90e7de03 [downloader/hls] Ask ffmpeg to quit when interrupting youtube-dl with 'Ctrl+C' (#8252)
Otherwise the mp4 file can't be played.
2016-01-19 22:07:14 +01:00
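The mechanism is simply sending ffmpeg its interactive 'q' command when the user interrupts the download, so it can finalize the MP4 instead of being killed mid-write (a sketch mirroring the downloader/hls.py hunk shown further down; the wrapper name is illustrative):

```python
import subprocess

def run_ffmpeg(args):
    # Run ffmpeg and, on Ctrl+C, ask it to quit cleanly via 'q' on stdin
    # so the output file remains playable (mostly useful for live streams).
    proc = subprocess.Popen(args, stdin=subprocess.PIPE)
    try:
        return proc.wait()
    except KeyboardInterrupt:
        proc.communicate(b'q')
        raise
```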
Yen Chi Hsuan
7d4d9c526a Merge branch 'ping-patch-8239' 2016-01-20 04:22:25 +08:00
Yen Chi Hsuan
fe6856b059 [neteasemusic] Use float_or_none 2016-01-20 04:21:51 +08:00
Yen Chi Hsuan
a54fbf2ca6 Merge branch 'patch-8239' of https://github.com/ping/youtube-dl into ping-patch-8239 2016-01-20 04:15:46 +08:00
Yen Chi Hsuan
d8024aebe5 Merge branch 'FounderSG-Weiqitv' 2016-01-20 04:06:09 +08:00
Yen Chi Hsuan
8652bd22f1 [weiqitv] Use single quotes 2016-01-20 04:04:39 +08:00
Yen Chi Hsuan
f15a9ca301 [weiqitv] Rename the extractor - capitalize 'TV' 2016-01-20 04:03:57 +08:00
Yen Chi Hsuan
65ced034b8 [weiqitv] Make code shorter 2016-01-20 04:02:30 +08:00
Yen Chi Hsuan
bec30224ff [letv] LetvCloud: Detect ext instead of the hardcoded one 2016-01-20 04:00:37 +08:00
Yen Chi Hsuan
0428106da3 [letv] LetvCloud: make title look like a title 2016-01-20 03:53:17 +08:00
Yen Chi Hsuan
73e7442456 [letv] LetvCloud: simplify and improve _VALID_URL 2016-01-20 03:42:01 +08:00
Yen Chi Hsuan
26de1bba83 [letv] LetvCloud: check error messages from server 2016-01-20 03:31:34 +08:00
Yen Chi Hsuan
e0690782b8 [letv] LetvCloud: guard against invalid URLs 2016-01-20 03:25:12 +08:00
Yen Chi Hsuan
8fff4f61e5 [letv] Use single quotes 2016-01-20 03:18:54 +08:00
Yen Chi Hsuan
10defdd06a [letv] Reduce duplicated code 2016-01-20 03:17:35 +08:00
Sergey M․
485139c15c [viewster] Tolerate missing synopsis (Closes #8274) 2016-01-20 00:02:46 +06:00
Sergey M․
b605ebb609 [lemonde] Add extractor 2016-01-19 22:09:55 +06:00
Sergey M․
aecfcd4e59 [ultimedia] Rename to digiteka 2016-01-19 21:51:46 +06:00
Sergey M․
942d46196f [ultimedia] Extend _VALID_URL to support digiteka 2016-01-19 21:47:06 +06:00
Yen Chi Hsuan
78be2eca7c Merge branch 'Weiqitv' of https://github.com/FounderSG/youtube-dl into FounderSG-Weiqitv 2016-01-19 23:39:32 +08:00
Sergey M․
1fa2b9841d [extractor/generic] Extend dailymotion embed regex 2016-01-19 21:20:45 +06:00
Sergey M․
9fbd0822aa [dailymotion] Extend _VALID_URL 2016-01-19 21:20:14 +06:00
Sergey M․
e323cf3ff3 [youtube] Skip test 2016-01-19 20:56:04 +06:00
Sergey M․
8ceabd4df3 [youtube] Capture and output unavailable message 2016-01-19 20:54:43 +06:00
Sergey M․
a8776b107b [youtube] Clarify test_Youtube_18 2016-01-18 23:19:38 +06:00
Sergey M․
096b533982 [youtube] Fix URL expansion in video description
Fixes test_Youtube_18
2016-01-18 23:17:45 +06:00
Sergey M․
dae503afaa [atresplayer] Skip HLS completely (Closes #8261) 2016-01-17 22:14:07 +06:00
Yen Chi Hsuan
b39eab7f94 Merge pull request #8262 from jwilk/https-everywhere
[ustream] Use HTTPS for GitHub URL
2016-01-17 22:10:03 +08:00
Jakub Wilk
e5a66240c0 [ustream] Use HTTPS for GitHub URL 2016-01-17 15:06:00 +01:00
ping
e0ef13ddeb [neteasemusic] Fallback to alt hosts if m5.music.126.net doesn't work 2016-01-17 07:48:46 +08:00
Sergey M․
855f90fa6f [ae] Rename to aenetworks and clarify extractor name and description 2016-01-17 03:02:45 +06:00
Yen Chi Hsuan
614db89ae3 [compat] Clarify the versions requiring compat_kwargs
It's supported since 2.7.0 alpha 1 and 2.6.5 rc 1. See
https://hg.python.org/cpython/file/v2.7a1/Misc/NEWS#l337
https://hg.python.org/cpython/file/v2.6.5rc1/Misc/NEWS#l28
2016-01-16 22:17:31 +08:00
Yen Chi Hsuan
1358b94163 [ae] Fix _TESTS 2016-01-16 20:56:53 +08:00
Yen Chi Hsuan
350e02d40d [bbc] Use _search_json_ld 2016-01-16 20:46:28 +08:00
Yen Chi Hsuan
0b26ba3fc8 [extractor/common] Allow passing more parameters to _search_json_ld 2016-01-16 20:45:36 +08:00
ping
3a0a78731b Fixes #8239 2016-01-16 12:17:07 +08:00
Sergey M
6be16ed24b [README.md] Add protocol usage example in format selection 2016-01-16 10:15:24 +06:00
Sergey M․
b555942428 [YoutubeDL] Ensure protocol is always present 2016-01-16 10:10:28 +06:00
Sergey M
b2dca40d81 [README.md] Improve format selection documentation 2016-01-16 09:55:52 +06:00
Sergey M
15870bbd01 [README.md] Mention new string operators for format selection 2016-01-16 09:53:31 +06:00
Yen Chi Hsuan
10d33b3473 [YoutubeDL] Introduce CSS3 like string operators 2016-01-16 09:53:12 +06:00
Sergey M
ac25992bc7 Merge pull request #8246 from dstftw/initial-json-ld-metadata-support
Initial JSON-LD metadata extraction support
2016-01-16 07:20:15 +06:00
Sergey M
30783c442d Merge pull request #8245 from dstftw/auto-generate-title-fields
[YoutubeDL] Auto generate title fields corresponding to the *_number fields
2016-01-16 07:20:03 +06:00
Sergey M․
a50a8003a0 [cultureunplugged] Improve (Closes #8060) 2016-01-16 07:10:51 +06:00
Sergey M․
315bdae00a [zippcast] Improve (Closes #8198) 2016-01-16 06:27:34 +06:00
ckuu
2ddfd26f1b '[ZippCast] Add new extractor'
Closes rg3/youtube-dl#6591
2016-01-16 06:25:06 +06:00
Philipp Hagemeister
f3ed5df611 release 2016.01.15 2016-01-15 19:43:04 +01:00
Sergey M․
b4e44234bc [ae] Use JSON-LD for TV series metadata 2016-01-16 00:36:49 +06:00
Sergey M․
4ca2a3cf3c [extractor/common] Add initial support for JSON-LD metadata extraction into info_dict 2016-01-16 00:36:02 +06:00
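The kind of markup this targets, and how it maps onto info_dict fields, can be seen in a standalone sketch (illustrative only: the sample HTML and helper name are made up, but the regex and the TVEpisode mapping follow the extractor/common.py hunk shown further down):

```python
import json
import re

SAMPLE_HTML = '''
<script type="application/ld+json">
{"@context": "http://schema.org", "@type": "TVEpisode",
 "name": "Winter Is Coming", "episodeNumber": "1",
 "partOfSeason": {"@type": "TVSeason", "seasonNumber": "1"},
 "partOfSeries": {"@type": "TVSeries", "name": "Mountain Men"}}
</script>
'''

def extract_json_ld(html):
    # Pull the JSON-LD payload out of the page and map schema.org
    # TVEpisode fields onto youtube-dl style info_dict keys.
    mobj = re.search(
        r'(?s)<script[^>]+type=(["\'])application/ld\+json\1[^>]*>(?P<json_ld>.+?)</script>',
        html)
    if not mobj:
        return {}
    data = json.loads(mobj.group('json_ld'))
    if data.get('@context') != 'http://schema.org' or data.get('@type') != 'TVEpisode':
        return {}
    info = {
        'episode': data.get('name'),
        'episode_number': int(data['episodeNumber']) if data.get('episodeNumber') else None,
    }
    season = data.get('partOfSeason') or {}
    series = data.get('partOfSeries') or {}
    if season.get('@type') == 'TVSeason' and season.get('seasonNumber'):
        info['season_number'] = int(season['seasonNumber'])
    if series.get('@type') == 'TVSeries':
        info['series'] = series.get('name')
    return dict((k, v) for k, v in info.items() if v is not None)

print(extract_json_ld(SAMPLE_HTML))
# {'episode': 'Winter Is Coming', 'episode_number': 1, 'season_number': 1, 'series': 'Mountain Men'}
```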
Sergey M․
33d2fc2f64 [YoutubeDL] Auto generate title fields corresponding to the *_number fields
Auto generate title fields corresponding to the *_number fields when missing in order to always have clean titles. This is very common for TV series.
2016-01-16 00:09:54 +06:00
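In practice the change amounts to something like the following (a small self-contained sketch mirroring the YoutubeDL.py hunk shown further down; the wrapper name is illustrative):

```python
def fill_number_titles(info_dict):
    # If an extractor reports e.g. episode_number=3 but no episode title,
    # synthesize 'Episode 3' so the metadata and output templates stay clean.
    for field in ('chapter', 'season', 'episode'):
        number = info_dict.get('%s_number' % field)
        if number is not None and not info_dict.get(field):
            info_dict[field] = '%s %d' % (field.capitalize(), number)
    return info_dict

print(fill_number_titles({'season_number': 1, 'episode_number': 3}))
# {'season_number': 1, 'episode_number': 3, 'season': 'Season 1', 'episode': 'Episode 3'}
```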
remitamine
27a95f51aa [cwtv] Add new extractor 2016-01-15 17:45:51 +01:00
Sergey M․
a78d6a9bb1 [ae] Improve _VALID_URL 2016-01-15 22:13:48 +06:00
Sergey M․
567f9a5809 [ae] Add extractor import 2016-01-15 22:12:51 +06:00
Sergey M․
3a421c724f [history] Remove import (Closes #8243) 2016-01-15 22:10:07 +06:00
Sergey M․
34dd81c03a [xtube:user] Fix extraction (Closes #8224) 2016-01-15 21:35:20 +06:00
Sergey M․
b3f502cdb9 [xtube] Add shortcut 2016-01-15 21:28:36 +06:00
remitamine
587dfd44a4 [ae] Add support for fyi.tv, aetv.com and mylifetime.com (closes #3599) 2016-01-15 16:18:07 +01:00
remitamine
52767c1ba0 [history] add support for episode pages (fixes #8240) 2016-01-15 15:16:57 +01:00
remitamine
014b5c59d8 [theplatform] extend _VALID_URL regex 2016-01-15 15:12:35 +01:00
remitamine
fad7a336a1 Revert "[history] fix signature and media url extraction (fixes #8240)"
This reverts commit ffbc0baf72.
2016-01-15 14:54:39 +01:00
remitamine
ffbc0baf72 [history] fix signature and media url extraction (fixes #8240) 2016-01-15 12:35:31 +01:00
Sergey M
345f12196c Merge pull request #8228 from jaimeMF/disable-file-handler
[YoutubeDL] urlopen: disable the 'file:' protocol (#8227)
2016-01-14 22:20:02 +05:00
Sergey M․
5769b68bc0 Credit @TomGijselinck for canvas (#7145) 2016-01-14 23:15:26 +06:00
Sergey M․
4e2743abd9 [canvas] Improve (Closes #7145) 2016-01-14 23:15:12 +06:00
Tom Gijselinck
be2d40a58a [Canvas] Add new extractor 2016-01-14 23:14:41 +06:00
Sergey M․
81549898c0 [prosiebensat1] Fix some extraction and update tests 2016-01-14 22:45:09 +06:00
Lucas
0baedd1851 [prosiebensat1] add support for 7tv.de 2016-01-14 22:14:04 +06:00
Sergey M․
6b559c2fbc [ntvde] Improve regex 2016-01-14 22:12:24 +06:00
Sergey M․
986986064e [orf:fm4] Add test 2016-01-14 22:11:33 +06:00
Sergey M․
4654c1d016 [orf:fm4] Extend _VALID_URL (Closes #8234) 2016-01-14 22:07:42 +06:00
Sergey M․
163e8369b0 [ntvde] Fix extraction 2016-01-14 22:05:04 +06:00
Sergey M․
5cc9c5dfa8 [unistra] Fix extraction 2016-01-14 21:53:24 +06:00
Sergey M․
fbd90643cb [vodlocker] Fix extraction (Closes #8231) 2016-01-14 21:48:08 +06:00
Jaime Marquínez Ferrándiz
30e2f2d76f [YoutubeDL] use more correct terminology in the error message for file:// URLs 2016-01-14 16:28:46 +01:00
Philipp Hagemeister
11c60089a8 release 2016.01.14 2016-01-14 15:43:21 +01:00
Sergey M․
abb893e6e4 [beeg] Update API URL 2016-01-14 19:57:56 +06:00
Sergey M․
4511c1976d [beeg] Fix extraction (Closes #8225) 2016-01-14 19:57:20 +06:00
Jaime Marquínez Ferrándiz
4240d50496 [YoutubeDL] improve error message for file:/// URLs 2016-01-14 14:07:54 +01:00
Jaime Marquínez Ferrándiz
6240b0a278 [YoutubeDL] urlopen: use build_opener again
Otherwise we would need to manually add handlers like HTTPRedirectHandler; instead we add a customized FileHandler instance that raises an error.
2016-01-14 08:16:39 +01:00
Jaime Marquínez Ferrándiz
e37afbe0b8 [YoutubeDL] urlopen: disable the 'file:' protocol (#8227)
If someone is running youtube-dl on a server to deliver files, the user could input 'file:///some/important/file' and youtube-dl would save that file as a video, giving the user access to sensitive information.
'file:' URLs can be filtered, but the user can use a URL to a crafted m3u8 manifest like:

    #EXTM3U
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:10.0
    file:///etc/passwd
    #EXT-X-ENDLIST

With this patch, 'file:' URLs raise URLError just as unknown protocols do.
2016-01-14 00:24:04 +01:00
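Outside of youtube-dl, the same trick looks roughly like this (a trimmed-down sketch along the lines of the YoutubeDL.py hunk shown further down; only the file handler is passed to build_opener here for brevity):

```python
try:
    import urllib.request as urllib_request   # Python 3
    import urllib.error as urllib_error
except ImportError:                            # Python 2
    import urllib2 as urllib_request
    urllib_error = urllib_request

# Passing our own FileHandler stops build_opener from adding the default one,
# and overriding file_open turns every 'file:' URL into an error.
file_handler = urllib_request.FileHandler()

def file_open(*args, **kwargs):
    raise urllib_error.URLError(
        'file:// scheme is explicitly disabled in youtube-dl for security reasons')

file_handler.file_open = file_open
opener = urllib_request.build_opener(file_handler)
```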
remitamine
40cf7fcbd2 [tudou] Add support for Albums and Playlists and extract more metadata 2016-01-13 13:29:00 +01:00
Yen Chi Hsuan
cc28492d31 [youtube] Fix acodec and vcodec order
In RFC 6381 there is no rule stating that the first part of codecs should
be video and the second part should be audio, though that seems to be the case
for data reported by YouTube.
2016-01-13 17:05:38 +08:00
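To illustrate why relying on position is fragile, a defensive split could classify each entry by well-known codec name prefixes instead (a hypothetical helper for illustration, not the actual patch):

```python
def split_codecs(codecs_str):
    # RFC 6381 does not fix the order of the entries in a codecs string,
    # so classify each entry by its prefix rather than by its position.
    video_prefixes = ('avc1', 'avc3', 'hev1', 'hvc1', 'vp8', 'vp9', 'theora')
    audio_prefixes = ('mp4a', 'opus', 'vorbis', 'ac-3', 'ec-3')
    vcodec = acodec = None
    for entry in (part.strip() for part in codecs_str.split(',')):
        if vcodec is None and entry.startswith(video_prefixes):
            vcodec = entry
        elif acodec is None and entry.startswith(audio_prefixes):
            acodec = entry
    return vcodec, acodec

# Works regardless of which entry comes first:
print(split_codecs('mp4a.40.2, avc1.4d401e'))  # ('avc1.4d401e', 'mp4a.40.2')
print(split_codecs('avc1.4d401e, mp4a.40.2'))  # ('avc1.4d401e', 'mp4a.40.2')
```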
Sergey M․
bc0550c262 [pluralsight] Fix new player (Closes #8215) 2016-01-13 08:18:37 +06:00
Sergey M․
b83b782dc4 [downloader/fragment] Move helper data to context dict 2016-01-13 00:00:31 +06:00
Sergey M․
16a348475c [dailymotion] Prefer direct links (Closes #8156) 2016-01-12 23:23:39 +06:00
Sergey M․
709185a264 [downloader/fragment] Smoother calculations
`downloaded_bytes` is now updated on each fragment progress hook invocation
2016-01-12 23:18:38 +06:00
Sergey M․
9cb1a06b6c [downloader/fragment] Remove unused code and fix zero division error 2016-01-12 22:09:38 +06:00
Sergey M․
be27283ef6 [iprima] Mark broken 2016-01-11 22:00:17 +06:00
Sergey M․
b924bfad68 [videott] Mark broken 2016-01-11 21:58:32 +06:00
Sergey M․
192b9a571c [videomega] Mark broken 2016-01-11 21:56:19 +06:00
remitamine
6ec6cb4e95 Revert "fix typos"
This reverts commit 36a0e46c39.
2016-01-10 19:27:22 +01:00
remitamine
36a0e46c39 fix typos 2016-01-10 17:55:41 +01:00
Jakub Wilk
dfb1b1468c Fix typos
Closes #8200.
2016-01-10 17:24:28 +01:00
Jaime Marquínez Ferrándiz
3c91e41614 [downloader/fragment] Don't fail if the 'Content-Length' header is missing
In some dailymotion videos (like http://www.dailymotion.com/video/x3k0dtv from #8156) the segment URLs don't have the 'Content-Length' header and HttpFD sets the 'total_bytes' field to None, so we also use '0' in that case (since we do different math operations with it).
2016-01-10 14:41:38 +01:00
Jaime Marquínez Ferrándiz
7e8a800f29 [bigflix] Use correct indentation to make flake8 happy 2016-01-10 14:26:27 +01:00
remitamine
2334762b03 [shahid] raise ExtractorError if the video is DRM protected 2016-01-10 07:55:58 +01:00
remitamine
3fc088f8c7 [dcn] extract video ids in season entries 2016-01-10 07:45:41 +01:00
Sergey M․
a9bbd26f1d [bigflix] Improve formats extraction 2016-01-10 10:49:27 +06:00
Sergey M․
6e99d5762a [bigflix] Extract all formats 2016-01-10 10:31:36 +06:00
Sergey M․
15b1c6656f Credit @vickyg3 for bigflix (#8194) 2016-01-10 10:03:56 +06:00
Sergey M
d412794205 Merge pull request #8194 from vickyg3/bigflix_ie
[Bigflix] Add new extractor for bigflix.com
2016-01-10 09:02:18 +05:00
Vignesh Venkat
0a899a1448 [Bigflix] Add new extractor for bigflix.com
Add an IE to support bigflix.com. It uses some sort of Silverlight
plugin whose video URL is populated using base64-encoded
flashvars, so it is quite straightforward to extract.
2016-01-09 19:45:58 -08:00
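The decoding step that makes this straightforward is just percent-decoding followed by base64 (a small sketch matching the decode_url helper in the bigflix.py hunk further down; the sample value is constructed on the spot rather than taken from the site):

```python
import base64
try:
    from urllib.parse import unquote   # Python 3
except ImportError:
    from urllib import unquote         # Python 2

def decode_url(quoted_b64_url):
    # The flashvars carry a percent-encoded base64 string; undo both layers.
    return base64.b64decode(unquote(quoted_b64_url).encode('ascii')).decode('utf-8')

# Build a sample value the same way the page would deliver it:
sample = base64.b64encode(b'http://example.com/video.mp4').decode('ascii').replace('=', '%3D')
print(decode_url(sample))  # http://example.com/video.mp4
```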
Sergey M․
7a34302e95 [canalc2] Fix extraction (Closes #8191) 2016-01-10 01:37:10 +06:00
Jaime Marquínez Ferrándiz
27783821af [xhamster] Remove unused import 2016-01-09 11:16:23 +01:00
Founder Fang
5f432ac8f5 [Weiqitv] Add new extractor 2015-12-22 06:21:56 +08:00
81 changed files with 1523 additions and 401 deletions

View File

@@ -151,3 +151,7 @@ Muratcan Simsek
Evan Lu
flatgreen
Brian Foley
Vignesh Venkat
Tom Gijselinck
Founder Fang
Andrew Alexeyew

View File

@@ -339,8 +339,8 @@ which means you can modify it, redistribute it or use it however you like.
preference, for example: "srt" or
"ass/srt/best"
--sub-lang LANGS Languages of the subtitles to download
(optional) separated by commas, use IETF
language tags like 'en,pt'
(optional) separated by commas, use --list-
subs for available language tags
## Authentication Options:
-u, --username USERNAME Login with this account ID
@@ -464,15 +464,77 @@ youtube-dl_test_video_.mp4 # A simple file name
# FORMAT SELECTION
By default youtube-dl tries to download the best quality, but sometimes you may want to download in a different format.
The simplest case is requesting a specific format, for example `-f 22`. You can get the list of available formats using `--list-formats`, you can also use a file extension (currently it supports aac, m4a, mp3, mp4, ogg, wav, webm) or the special names `best`, `bestvideo`, `bestaudio` and `worst`.
By default youtube-dl tries to download the best available quality, i.e. if you want the best quality you **don't need** to pass any special options, youtube-dl will guess it for you by **default**.
If you want to download multiple videos and they don't have the same formats available, you can specify the order of preference using slashes, as in `-f 22/17/18`. You can also filter the video results by putting a condition in brackets, as in `-f "best[height=720]"` (or `-f "[filesize>10M]"`). This works for filesize, height, width, tbr, abr, vbr, asr, and fps and the comparisons <, <=, >, >=, =, != and for ext, acodec, vcodec, container, and protocol and the comparisons =, != . Formats for which the value is not known are excluded unless you put a question mark (?) after the operator. You can combine format filters, so `-f "[height <=? 720][tbr>500]"` selects up to 720p videos (or videos where the height is not known) with a bitrate of at least 500 KBit/s. Use commas to download multiple formats, such as `-f 136/137/mp4/bestvideo,140/m4a/bestaudio`. You can merge the video and audio of two formats into a single file using `-f <video-format>+<audio-format>` (requires ffmpeg or avconv), for example `-f bestvideo+bestaudio`. Format selectors can also be grouped using parentheses, for example if you want to download the best mp4 and webm formats with a height lower than 480 you can use `-f '(mp4,webm)[height<480]'`.
But sometimes you may want to download in a different format, for example when you are on a slow or intermittent connection. The key mechanism for achieving this is so called *format selection* based on which you can explicitly specify desired format, select formats based on some criterion or criteria, setup precedence and much more.
Since the end of April 2015 and version 2015.04.26 youtube-dl uses `-f bestvideo+bestaudio/best` as default format selection (see #5447, #5456). If ffmpeg or avconv are installed this results in downloading `bestvideo` and `bestaudio` separately and muxing them together into a single file giving the best overall quality available. Otherwise it falls back to `best` and results in downloading the best available quality served as a single file. `best` is also needed for videos that don't come from YouTube because they don't provide the audio and video in two different files. If you want to only download some dash formats (for example if you are not interested in getting videos with a resolution higher than 1080p), you can add `-f bestvideo[height<=?1080]+bestaudio/best` to your configuration file. Note that if you use youtube-dl to stream to `stdout` (and most likely to pipe it to your media player then), i.e. you explicitly specify output template as `-o -`, youtube-dl still uses `-f best` format selection in order to start content delivery immediately to your player and not to wait until `bestvideo` and `bestaudio` are downloaded and muxed.
The general syntax for format selection is `--format FORMAT` or shorter `-f FORMAT` where `FORMAT` is a *selector expression*, i.e. an expression that describes format or formats you would like to download.
The simplest case is requesting a specific format, for example with `-f 22` you can download the format with format code equal to 22. You can get the list of available format codes for particular video using `--list-formats` or `-F`. Note that these format codes are extractor specific.
You can also use a file extension (currently `3gp`, `aac`, `flv`, `m4a`, `mp3`, `mp4`, `ogg`, `wav`, `webm` are supported) to download best quality format of particular file extension served as a single file, e.g. `-f webm` will download best quality format with `webm` extension served as a single file.
You can also use special names to select particular edge case format:
- `best`: Select best quality format represented by single file with video and audio
- `worst`: Select worst quality format represented by single file with video and audio
- `bestvideo`: Select best quality video only format (e.g. DASH video), may not be available
- `worstvideo`: Select worst quality video only format, may not be available
- `bestaudio`: Select best quality audio only format, may not be available
- `worstaudio`: Select worst quality audio only format, may not be available
For example, to download worst quality video only format you can use `-f worstvideo`.
If you want to download multiple videos and they don't have the same formats available, you can specify the order of preference using slashes. Note that slash is left-associative, i.e. formats on the left hand side are preferred, for example `-f 22/17/18` will download format 22 if it's available, otherwise it will download format 17 if it's available, otherwise it will download format 18 if it's available, otherwise it will complain that no suitable formats are available for download.
If you want to download several formats of the same video use comma as a separator, e.g. `-f 22,17,18` will download all these three formats, of course if they are available. Or more sophisticated example combined with precedence feature `-f 136/137/mp4/bestvideo,140/m4a/bestaudio`.
You can also filter the video formats by putting a condition in brackets, as in `-f "best[height=720]"` (or `-f "[filesize>10M]"`).
The following numeric meta fields can be used with comparisons `<`, `<=`, `>`, `>=`, `=` (equals), `!=` (not equals):
- `filesize`: The number of bytes, if known in advance
- `width`: Width of the video, if known
- `height`: Height of the video, if known
- `tbr`: Average bitrate of audio and video in KBit/s
- `abr`: Average audio bitrate in KBit/s
- `vbr`: Average video bitrate in KBit/s
- `asr`: Audio sampling rate in Hertz
- `fps`: Frame rate
Also filtering works for comparisons `=` (equals), `!=` (not equals), `^=` (begins with), `$=` (ends with), `*=` (contains) and following string meta fields:
- `ext`: File extension
- `acodec`: Name of the audio codec in use
- `vcodec`: Name of the video codec in use
- `container`: Name of the container format
- `protocol`: The protocol that will be used for the actual download, lower-case. `http`, `https`, `rtsp`, `rtmp`, `rtmpe`, `m3u8`, or `m3u8_native`
Note that none of the aforementioned meta fields are guaranteed to be present since this solely depends on the metadata obtained by particular extractor, i.e. the metadata offered by video hoster.
Formats for which the value is not known are excluded unless you put a question mark (`?`) after the operator. You can combine format filters, so `-f "[height <=? 720][tbr>500]"` selects up to 720p videos (or videos where the height is not known) with a bitrate of at least 500 KBit/s.
You can merge the video and audio of two formats into a single file using `-f <video-format>+<audio-format>` (requires ffmpeg or avconv installed), for example `-f bestvideo+bestaudio` will download best video only format, best audio only format and mux them together with ffmpeg/avconv.
Format selectors can also be grouped using parentheses, for example if you want to download the best mp4 and webm formats with a height lower than 480 you can use `-f '(mp4,webm)[height<480]'`.
Since the end of April 2015 and version 2015.04.26 youtube-dl uses `-f bestvideo+bestaudio/best` as default format selection (see #5447, #5456). If ffmpeg or avconv are installed this results in downloading `bestvideo` and `bestaudio` separately and muxing them together into a single file giving the best overall quality available. Otherwise it falls back to `best` and results in downloading the best available quality served as a single file. `best` is also needed for videos that don't come from YouTube because they don't provide the audio and video in two different files. If you want to only download some DASH formats (for example if you are not interested in getting videos with a resolution higher than 1080p), you can add `-f bestvideo[height<=?1080]+bestaudio/best` to your configuration file. Note that if you use youtube-dl to stream to `stdout` (and most likely to pipe it to your media player then), i.e. you explicitly specify output template as `-o -`, youtube-dl still uses `-f best` format selection in order to start content delivery immediately to your player and not to wait until `bestvideo` and `bestaudio` are downloaded and muxed.
If you want to preserve the old format selection behavior (prior to youtube-dl 2015.04.26), i.e. you want to download the best available quality media served as a single file, you should explicitly specify your choice with `-f best`. You may want to add it to the [configuration file](#configuration) in order not to type it every time you run youtube-dl.
Examples (note on Windows you may need to use double quotes instead of single):
```bash
# Download best mp4 format available or any other best if no mp4 available
$ youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best'
# Download best format available but not better than 480p
$ youtube-dl -f 'bestvideo[height<=480]+bestaudio/best[height<=480]'
# Download best video only format but no bigger than 50 MB
$ youtube-dl -f 'best[filesize<50M]'
# Download best format available via direct link over HTTP/HTTPS protocol
$ youtube-dl -f '(bestvideo+bestaudio/best)[protocol^=http]'
```
# VIDEO SELECTION
Videos can be filtered by their upload date using the options `--date`, `--datebefore` or `--dateafter`. They accept dates in two formats:

View File

@@ -5,7 +5,7 @@ from __future__ import with_statement, unicode_literals
import datetime
import glob
import io # For Python 2 compatibilty
import io # For Python 2 compatibility
import os
import re

View File

@@ -24,6 +24,7 @@
- **AdobeTVShow**
- **AdobeTVVideo**
- **AdultSwim**
- **aenetworks**: A+E Networks: A&E, Lifetime, History.com, FYI Network
- **Aftonbladet**
- **AirMozilla**
- **AlJazeera**
@@ -42,6 +43,7 @@
- **ARD:mediathek**
- **arte.tv**
- **arte.tv:+7**
- **arte.tv:cinema**
- **arte.tv:concert**
- **arte.tv:creative**
- **arte.tv:ddc**
@@ -65,6 +67,7 @@
- **Beeg**
- **BehindKink**
- **Bet**
- **Bigflix**
- **Bild**: Bild.de
- **BiliBili**
- **BleacherReport**
@@ -84,6 +87,7 @@
- **CamdemyFolder**
- **canalc2.tv**
- **Canalplus**: canalplus.fr, piwiplus.fr and d8.tv
- **Canvas**
- **CBS**
- **CBSNews**: CBS News
- **CBSSports**
@@ -121,6 +125,8 @@
- **CSpan**: C-SPAN
- **CtsNews**: 華視新聞
- **culturebox.francetvinfo.fr**
- **CultureUnplugged**
- **CWTV**
- **dailymotion**
- **dailymotion:playlist**
- **dailymotion:user**
@@ -137,6 +143,7 @@
- **defense.gouv.fr**
- **democracynow**
- **DHM**: Filmarchiv - Deutsches Historisches Museum
- **Digiteka**
- **Discovery**
- **Dotsub**
- **DouyuTV**: 斗鱼
@@ -228,7 +235,6 @@
- **Helsinki**: helsinki.fi
- **HentaiStigma**
- **HistoricFilms**
- **History**
- **hitbox**
- **hitbox:live**
- **HornBunny**
@@ -251,7 +257,7 @@
- **Instagram**
- **instagram:user**: Instagram user profile
- **InternetVideoArchive**
- **IPrima**
- **IPrima** (Currently broken)
- **iqiyi**: 爱奇艺
- **Ir90Tv**
- **ivi**: ivi.ru
@@ -284,7 +290,9 @@
- **la7.tv**
- **Laola1Tv**
- **Lecture2Go**
- **Lemonde**
- **Letv**: 乐视网
- **LetvCloud**: 乐视云
- **LetvPlaylist**
- **LetvTv**
- **Libsyn**
@@ -297,6 +305,7 @@
- **livestream**
- **livestream:original**
- **LnkGo**
- **LoveHomePorn**
- **lrt.lt**
- **lynda**: lynda.com videos
- **lynda:course**: lynda.com online courses
@@ -483,6 +492,7 @@
- **rtve.es:live**: RTVE.es live streams
- **RTVNH**
- **RUHD**
- **RulePorn**
- **rutube**: Rutube videos
- **rutube:channel**: Rutube channels
- **rutube:embed**: Rutube embedded videos
@@ -599,10 +609,13 @@
- **ToypicsUser**: Toypics user profile
- **TrailerAddict** (Currently broken)
- **Trilulilu**
- **trollvids**
- **TruTube**
- **Tube8**
- **TubiTv**
- **Tudou**
- **tudou**
- **tudou:album**
- **tudou:playlist**
- **Tumblr**
- **tunein:clip**
- **tunein:program**
@@ -635,7 +648,6 @@
- **udemy**
- **udemy:course**
- **UDNEmbed**: 聯合影音
- **Ultimedia**
- **Unistra**
- **Urort**: NRK P3 Urørt
- **ustream**
@@ -655,12 +667,12 @@
- **video.mit.edu**
- **VideoDetective**
- **videofy.me**
- **VideoMega**
- **VideoMega** (Currently broken)
- **videomore**
- **videomore:season**
- **videomore:video**
- **VideoPremium**
- **VideoTt**: video.tt - Your True Tube
- **VideoTt**: video.tt - Your True Tube (Currently broken)
- **videoweed**: VideoWeed
- **Vidme**
- **Vidzi**
@@ -702,6 +714,7 @@
- **WebOfStories**
- **WebOfStoriesPlaylist**
- **Weibo**
- **WeiqiTV**: WQTV
- **wholecloud**: WholeCloud
- **Wimp**
- **Wistia**
@@ -753,3 +766,4 @@
- **ZDFChannel**
- **zingmp3:album**: mp3.zing.vn albums
- **zingmp3:song**: mp3.zing.vn songs
- **ZippCast**

View File

@@ -12,7 +12,7 @@ import copy
from test.helper import FakeYDL, assertRegexpMatches
from youtube_dl import YoutubeDL
from youtube_dl.compat import compat_str
from youtube_dl.compat import compat_str, compat_urllib_error
from youtube_dl.extractor import YoutubeIE
from youtube_dl.postprocessor.common import PostProcessor
from youtube_dl.utils import ExtractorError, match_filter_func
@@ -631,6 +631,11 @@ class TestYoutubeDL(unittest.TestCase):
result = get_ids({'playlist_items': '10'})
self.assertEqual(result, [])
def test_urlopen_no_file_protocol(self):
# see https://github.com/rg3/youtube-dl/issues/8227
ydl = YDL()
self.assertRaises(compat_urllib_error.URLError, ydl.urlopen, 'file:///etc/passwd')
if __name__ == '__main__':
unittest.main()

test/test_update.py (new file, 30 lines)
View File

@@ -0,0 +1,30 @@
#!/usr/bin/env python
from __future__ import unicode_literals
# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import json
from youtube_dl.update import rsa_verify
class TestUpdate(unittest.TestCase):
def test_rsa_verify(self):
UPDATES_RSA_KEY = (0x9d60ee4d8f805312fdb15a62f87b95bd66177b91df176765d13514a0f1754bcd2057295c5b6f1d35daa6742c3ffc9a82d3e118861c207995a8031e151d863c9927e304576bc80692bc8e094896fcf11b66f3e29e04e3a71e9a11558558acea1840aec37fc396fb6b65dc81a1c4144e03bd1c011de62e3f1357b327d08426fe93, 65537)
with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'versions.json'), 'rb') as f:
versions_info = f.read().decode()
versions_info = json.loads(versions_info)
signature = versions_info['signature']
del versions_info['signature']
self.assertTrue(rsa_verify(
json.dumps(versions_info, sort_keys=True).encode('utf-8'),
signature, UPDATES_RSA_KEY))
if __name__ == '__main__':
unittest.main()

View File

@@ -66,7 +66,7 @@ class TestAnnotations(unittest.TestCase):
textTag = a.find('TEXT')
text = textTag.text
self.assertTrue(text in expected) # assertIn only added in python 2.7
# remove the first occurance, there could be more than one annotation with the same text
# remove the first occurrence, there could be more than one annotation with the same text
expected.remove(text)
# We should have seen (and removed) all the expected annotation texts.
self.assertEqual(len(expected), 0, 'Not all expected annotations were found.')

test/versions.json (new file, 34 lines)
View File

@@ -0,0 +1,34 @@
{
"latest": "2013.01.06",
"signature": "72158cdba391628569ffdbea259afbcf279bbe3d8aeb7492690735dc1cfa6afa754f55c61196f3871d429599ab22f2667f1fec98865527b32632e7f4b3675a7ef0f0fbe084d359256ae4bba68f0d33854e531a70754712f244be71d4b92e664302aa99653ee4df19800d955b6c4149cd2b3f24288d6e4b40b16126e01f4c8ce6",
"versions": {
"2013.01.02": {
"bin": [
"http://youtube-dl.org/downloads/2013.01.02/youtube-dl",
"f5b502f8aaa77675c4884938b1e4871ebca2611813a0c0e74f60c0fbd6dcca6b"
],
"exe": [
"http://youtube-dl.org/downloads/2013.01.02/youtube-dl.exe",
"75fa89d2ce297d102ff27675aa9d92545bbc91013f52ec52868c069f4f9f0422"
],
"tar": [
"http://youtube-dl.org/downloads/2013.01.02/youtube-dl-2013.01.02.tar.gz",
"6a66d022ac8e1c13da284036288a133ec8dba003b7bd3a5179d0c0daca8c8196"
]
},
"2013.01.06": {
"bin": [
"http://youtube-dl.org/downloads/2013.01.06/youtube-dl",
"64b6ed8865735c6302e836d4d832577321b4519aa02640dc508580c1ee824049"
],
"exe": [
"http://youtube-dl.org/downloads/2013.01.06/youtube-dl.exe",
"58609baf91e4389d36e3ba586e21dab882daaaee537e4448b1265392ae86ff84"
],
"tar": [
"http://youtube-dl.org/downloads/2013.01.06/youtube-dl-2013.01.06.tar.gz",
"fe77ab20a95d980ed17a659aa67e371fdd4d656d19c4c7950e7b720b0c2f1a86"
]
}
}
}

View File

@@ -46,6 +46,7 @@ from .utils import (
DateRange,
DEFAULT_OUTTMPL,
determine_ext,
determine_protocol,
DownloadError,
encode_compat_str,
encodeFilename,
@@ -898,6 +899,9 @@ class YoutubeDL(object):
STR_OPERATORS = {
'=': operator.eq,
'!=': operator.ne,
'^=': lambda attr, value: attr.startswith(value),
'$=': lambda attr, value: attr.endswith(value),
'*=': lambda attr, value: value in attr,
}
str_operator_rex = re.compile(r'''(?x)
\s*(?P<key>ext|acodec|vcodec|container|protocol)
@@ -1244,6 +1248,12 @@ class YoutubeDL(object):
except (ValueError, OverflowError, OSError):
pass
# Auto generate title fields corresponding to the *_number fields when missing
# in order to always have clean titles. This is very common for TV series.
for field in ('chapter', 'season', 'episode'):
if info_dict.get('%s_number' % field) is not None and not info_dict.get(field):
info_dict[field] = '%s %d' % (field.capitalize(), info_dict['%s_number' % field])
subtitles = info_dict.get('subtitles')
if subtitles:
for _, subtitle in subtitles.items():
@@ -1300,6 +1310,10 @@ class YoutubeDL(object):
# Automatically determine file extension if missing
if 'ext' not in format:
format['ext'] = determine_ext(format['url']).lower()
# Automatically determine protocol if missing (useful for format
# selection purposes)
if 'protocol' not in format:
format['protocol'] = determine_protocol(format)
# Add HTTP headers, so that external programs can use them from the
# json output
full_format_info = info_dict.copy()
@@ -1312,7 +1326,7 @@ class YoutubeDL(object):
# only set the 'formats' fields if the original info_dict list them
# otherwise we end up with a circular reference, the first (and unique)
# element in the 'formats' field in info_dict is info_dict itself,
# wich can't be exported to json
# which can't be exported to json
info_dict['formats'] = formats
if self.params.get('listformats'):
self.list_formats(info_dict)
@@ -1986,8 +2000,19 @@ class YoutubeDL(object):
https_handler = make_HTTPS_handler(self.params, debuglevel=debuglevel)
ydlh = YoutubeDLHandler(self.params, debuglevel=debuglevel)
data_handler = compat_urllib_request_DataHandler()
# When passing our own FileHandler instance, build_opener won't add the
# default FileHandler and allows us to disable the file protocol, which
# can be used for malicious purposes (see
# https://github.com/rg3/youtube-dl/issues/8227)
file_handler = compat_urllib_request.FileHandler()
def file_open(*args, **kwargs):
raise compat_urllib_error.URLError('file:// scheme is explicitly disabled in youtube-dl for security reasons')
file_handler.file_open = file_open
opener = compat_urllib_request.build_opener(
proxy_handler, https_handler, cookie_processor, ydlh, data_handler)
proxy_handler, https_handler, cookie_processor, ydlh, data_handler, file_handler)
# Delete the default user-agent header, which would otherwise apply in
# cases where our custom HTTP handler doesn't come into play

View File

@@ -433,7 +433,7 @@ if sys.version_info < (3, 0) and sys.platform == 'win32':
else:
compat_getpass = getpass.getpass
# Old 2.6 and 2.7 releases require kwargs to be bytes
# Python < 2.6.5 require kwargs to be bytes
try:
def _testfunc(x):
pass

View File

@@ -295,7 +295,7 @@ class FileDownloader(object):
def report_retry(self, count, retries):
"""Report retry in case of HTTP error 5xx"""
self.to_screen('[download] Got server HTTP error. Retrying (attempt %d of %d)...' % (count, retries))
self.to_screen('[download] Got server HTTP error. Retrying (attempt %d of %.0f)...' % (count, retries))
def report_file_already_downloaded(self, file_name):
"""Report file has already been fully downloaded."""

View File

@@ -59,37 +59,43 @@ class FragmentFD(FileDownloader):
'filename': ctx['filename'],
'tmpfilename': ctx['tmpfilename'],
}
start = time.time()
ctx['started'] = start
ctx.update({
'started': start,
# Total complete fragments downloaded so far in bytes
'complete_frags_downloaded_bytes': 0,
# Amount of fragment's bytes downloaded by the time of the previous
# frag progress hook invocation
'prev_frag_downloaded_bytes': 0,
})
def frag_progress_hook(s):
if s['status'] not in ('downloading', 'finished'):
return
frag_total_bytes = s.get('total_bytes', 0)
if s['status'] == 'finished':
state['downloaded_bytes'] += frag_total_bytes
state['frag_index'] += 1
frag_total_bytes = s.get('total_bytes') or 0
estimated_size = (
(state['downloaded_bytes'] + frag_total_bytes) /
(ctx['complete_frags_downloaded_bytes'] + frag_total_bytes) /
(state['frag_index'] + 1) * total_frags)
time_now = time.time()
state['total_bytes_estimate'] = estimated_size
state['elapsed'] = time_now - start
if s['status'] == 'finished':
progress = self.calc_percent(state['frag_index'], total_frags)
state['frag_index'] += 1
state['downloaded_bytes'] += frag_total_bytes - ctx['prev_frag_downloaded_bytes']
ctx['complete_frags_downloaded_bytes'] = state['downloaded_bytes']
ctx['prev_frag_downloaded_bytes'] = 0
else:
frag_downloaded_bytes = s['downloaded_bytes']
frag_progress = self.calc_percent(frag_downloaded_bytes,
frag_total_bytes)
progress = self.calc_percent(state['frag_index'], total_frags)
progress += frag_progress / float(total_frags)
state['downloaded_bytes'] += frag_downloaded_bytes - ctx['prev_frag_downloaded_bytes']
state['eta'] = self.calc_eta(
start, time_now, estimated_size, state['downloaded_bytes'] + frag_downloaded_bytes)
start, time_now, estimated_size,
state['downloaded_bytes'])
state['speed'] = s.get('speed')
ctx['prev_frag_downloaded_bytes'] = frag_downloaded_bytes
self._hook_progress(state)
ctx['dl'].add_progress_hook(frag_progress_hook)

View File

@@ -46,7 +46,16 @@ class HlsFD(FileDownloader):
self._debug_cmd(args)
retval = subprocess.call(args, stdin=subprocess.PIPE)
proc = subprocess.Popen(args, stdin=subprocess.PIPE)
try:
retval = proc.wait()
except KeyboardInterrupt:
# subprocces.run would send the SIGKILL signal to ffmpeg and the
# mp4 file couldn't be played, but if we ask ffmpeg to quit it
# produces a file that is playable (this is mostly useful for live
# streams)
proc.communicate(b'q')
raise
if retval == 0:
fsize = os.path.getsize(encodeFilename(tmpfilename))
self.to_screen('\r[%s] %s bytes' % (args[0], fsize))

View File

@@ -15,6 +15,7 @@ from .adobetv import (
AdobeTVVideoIE,
)
from .adultswim import AdultSwimIE
from .aenetworks import AENetworksIE
from .aftonbladet import AftonbladetIE
from .airmozilla import AirMozillaIE
from .aljazeera import AlJazeeraIE
@@ -41,6 +42,7 @@ from .arte import (
ArteTVCreativeIE,
ArteTVConcertIE,
ArteTVFutureIE,
ArteTVCinemaIE,
ArteTVDDCIE,
ArteTVEmbedIE,
)
@@ -61,6 +63,7 @@ from .beeg import BeegIE
from .behindkink import BehindKinkIE
from .beatportpro import BeatportProIE
from .bet import BetIE
from .bigflix import BigflixIE
from .bild import BildIE
from .bilibili import BiliBiliIE
from .bleacherreport import (
@@ -85,6 +88,7 @@ from .camdemy import (
)
from .canalplus import CanalplusIE
from .canalc2 import Canalc2IE
from .canvas import CanvasIE
from .cbs import CBSIE
from .cbsnews import CBSNewsIE
from .cbssports import CBSSportsIE
@@ -127,6 +131,8 @@ from .crunchyroll import (
)
from .cspan import CSpanIE
from .ctsnews import CtsNewsIE
from .cultureunplugged import CultureUnpluggedIE
from .cwtv import CWTVIE
from .dailymotion import (
DailymotionIE,
DailymotionPlaylistIE,
@@ -261,7 +267,6 @@ from .hellporno import HellPornoIE
from .helsinki import HelsinkiIE
from .hentaistigma import HentaiStigmaIE
from .historicfilms import HistoricFilmsIE
from .history import HistoryIE
from .hitbox import HitboxIE, HitboxLiveIE
from .hornbunny import HornBunnyIE
from .hotnewhiphop import HotNewHipHopIE
@@ -329,10 +334,12 @@ from .kuwo import (
from .la7 import LA7IE
from .laola1tv import Laola1TvIE
from .lecture2go import Lecture2GoIE
from .lemonde import LemondeIE
from .letv import (
LetvIE,
LetvTvIE,
LetvPlaylistIE
LetvPlaylistIE,
LetvCloudIE,
)
from .libsyn import LibsynIE
from .lifenews import (
@@ -351,6 +358,7 @@ from .livestream import (
LivestreamShortenerIE,
)
from .lnkgo import LnkGoIE
from .lovehomeporn import LoveHomePornIE
from .lrt import LRTIE
from .lynda import (
LyndaIE,
@@ -573,6 +581,7 @@ from .rts import RTSIE
from .rtve import RTVEALaCartaIE, RTVELiveIE, RTVEInfantilIE
from .rtvnh import RTVNHIE
from .ruhd import RUHDIE
from .ruleporn import RulePornIE
from .rutube import (
RutubeIE,
RutubeChannelIE,
@@ -719,10 +728,15 @@ from .toutv import TouTvIE
from .toypics import ToypicsUserIE, ToypicsIE
from .traileraddict import TrailerAddictIE
from .trilulilu import TriluliluIE
from .trollvids import TrollvidsIE
from .trutube import TruTubeIE
from .tube8 import Tube8IE
from .tubitv import TubiTvIE
from .tudou import TudouIE
from .tudou import (
TudouIE,
TudouPlaylistIE,
TudouAlbumIE,
)
from .tumblr import TumblrIE
from .tunein import (
TuneInClipIE,
@@ -769,7 +783,7 @@ from .udemy import (
UdemyCourseIE
)
from .udn import UDNEmbedIE
from .ultimedia import UltimediaIE
from .digiteka import DigitekaIE
from .unistra import UnistraIE
from .urort import UrortIE
from .ustream import UstreamIE, UstreamChannelIE
@@ -848,6 +862,7 @@ from .webofstories import (
WebOfStoriesPlaylistIE,
)
from .weibo import WeiboIE
from .weiqitv import WeiqiTVIE
from .wimp import WimpIE
from .wistia import WistiaIE
from .worldstarhiphop import WorldStarHipHopIE
@@ -908,6 +923,7 @@ from .zingmp3 import (
ZingMp3SongIE,
ZingMp3AlbumIE,
)
from .zippcast import ZippCastIE
_ALL_CLASSES = [
klass

View File

@@ -0,0 +1,66 @@
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import smuggle_url
class AENetworksIE(InfoExtractor):
IE_NAME = 'aenetworks'
IE_DESC = 'A+E Networks: A&E, Lifetime, History.com, FYI Network'
_VALID_URL = r'https?://(?:www\.)?(?:(?:history|aetv|mylifetime)\.com|fyi\.tv)/(?:[^/]+/)+(?P<id>[^/]+?)(?:$|[?#])'
_TESTS = [{
'url': 'http://www.history.com/topics/valentines-day/history-of-valentines-day/videos/bet-you-didnt-know-valentines-day?m=528e394da93ae&s=undefined&f=1&free=false',
'info_dict': {
'id': 'g12m5Gyt3fdR',
'ext': 'mp4',
'title': "Bet You Didn't Know: Valentine's Day",
'description': 'md5:7b57ea4829b391995b405fa60bd7b5f7',
},
'params': {
# m3u8 download
'skip_download': True,
},
'add_ie': ['ThePlatform'],
'expected_warnings': ['JSON-LD'],
}, {
'url': 'http://www.history.com/shows/mountain-men/season-1/episode-1',
'info_dict': {
'id': 'eg47EERs_JsZ',
'ext': 'mp4',
'title': "Winter Is Coming",
'description': 'md5:641f424b7a19d8e24f26dea22cf59d74',
},
'params': {
# m3u8 download
'skip_download': True,
},
'add_ie': ['ThePlatform'],
}, {
'url': 'http://www.aetv.com/shows/duck-dynasty/video/inlawful-entry',
'only_matching': True
}, {
'url': 'http://www.fyi.tv/shows/tiny-house-nation/videos/207-sq-ft-minnesota-prairie-cottage',
'only_matching': True
}, {
'url': 'http://www.mylifetime.com/shows/project-runway-junior/video/season-1/episode-6/superstar-clients',
'only_matching': True
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
video_url_re = [
r'data-href="[^"]*/%s"[^>]+data-release-url="([^"]+)"' % video_id,
r"media_url\s*=\s*'([^']+)'"
]
video_url = self._search_regex(video_url_re, webpage, 'video url')
info = self._search_json_ld(webpage, video_id, fatal=False)
info.update({
'_type': 'url_transparent',
'url': smuggle_url(video_url, {'sig': {'key': 'crazyjava', 'secret': 's3cr3t'}}),
})
return info

View File

@@ -1,11 +1,9 @@
from __future__ import unicode_literals
import re
from .common import InfoExtractor
from .nuevo import NuevoBaseIE
class AnitubeIE(InfoExtractor):
class AnitubeIE(NuevoBaseIE):
IE_NAME = 'anitube.se'
_VALID_URL = r'https?://(?:www\.)?anitube\.se/video/(?P<id>\d+)'
@@ -22,38 +20,11 @@ class AnitubeIE(InfoExtractor):
}
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
video_id = mobj.group('id')
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
key = self._search_regex(
r'src=["\']https?://[^/]+/embed/([A-Za-z0-9_-]+)', webpage, 'key')
config_xml = self._download_xml(
'http://www.anitube.se/nuevo/econfig.php?key=%s' % key, key)
video_title = config_xml.find('title').text
thumbnail = config_xml.find('image').text
duration = float(config_xml.find('duration').text)
formats = []
video_url = config_xml.find('file')
if video_url is not None:
formats.append({
'format_id': 'sd',
'url': video_url.text,
})
video_url = config_xml.find('filehd')
if video_url is not None:
formats.append({
'format_id': 'hd',
'url': video_url.text,
})
return {
'id': video_id,
'title': video_title,
'thumbnail': thumbnail,
'duration': duration,
'formats': formats
}
return self._extract_nuevo(
'http://www.anitube.se/nuevo/econfig.php?key=%s' % key, video_id)

View File

@@ -199,25 +199,19 @@ class ArteTVCreativeIE(ArteTVPlus7IE):
class ArteTVFutureIE(ArteTVPlus7IE):
IE_NAME = 'arte.tv:future'
_VALID_URL = r'https?://future\.arte\.tv/(?P<lang>fr|de)/(thema|sujet)/.*?#article-anchor-(?P<id>\d+)'
_VALID_URL = r'https?://future\.arte\.tv/(?P<lang>fr|de)/(?P<id>.+)'
_TEST = {
'url': 'http://future.arte.tv/fr/sujet/info-sciences#article-anchor-7081',
_TESTS = [{
'url': 'http://future.arte.tv/fr/info-sciences/les-ecrevisses-aussi-sont-anxieuses',
'info_dict': {
'id': '5201',
'id': '050940-028-A',
'ext': 'mp4',
'title': 'Les champignons au secours de la planète',
'upload_date': '20131101',
'title': 'Les écrevisses aussi peuvent être anxieuses',
},
}
def _real_extract(self, url):
anchor_id, lang = self._extract_url_info(url)
webpage = self._download_webpage(url, anchor_id)
row = self._search_regex(
r'(?s)id="%s"[^>]*>.+?(<div[^>]*arte_vp_url[^>]*>)' % anchor_id,
webpage, 'row')
return self._extract_from_webpage(row, anchor_id, lang)
}, {
'url': 'http://future.arte.tv/fr/la-science-est-elle-responsable',
'only_matching': True,
}]
class ArteTVDDCIE(ArteTVPlus7IE):
@@ -255,6 +249,23 @@ class ArteTVConcertIE(ArteTVPlus7IE):
}
class ArteTVCinemaIE(ArteTVPlus7IE):
IE_NAME = 'arte.tv:cinema'
_VALID_URL = r'https?://cinema\.arte\.tv/(?P<lang>de|fr)/(?P<id>.+)'
_TEST = {
'url': 'http://cinema.arte.tv/de/node/38291',
'md5': '6b275511a5107c60bacbeeda368c3aa1',
'info_dict': {
'id': '055876-000_PWA12025-D',
'ext': 'mp4',
'title': 'Tod auf dem Nil',
'upload_date': '20160122',
'description': 'md5:7f749bbb77d800ef2be11d54529b96bc',
},
}
class ArteTVEmbedIE(ArteTVPlus7IE):
IE_NAME = 'arte.tv:embed'
_VALID_URL = r'''(?x)

View File

@@ -132,11 +132,6 @@ class AtresPlayerIE(InfoExtractor):
})
formats.append(format_info)
m3u8_url = player.get('urlVideoHls')
if m3u8_url:
formats.extend(self._extract_m3u8_formats(
m3u8_url, episode_id, 'mp4', 'm3u8_native', m3u8_id='hls', fatal=False))
timestamp = int_or_none(self._download_webpage(
self._TIME_API_URL,
video_id, 'Downloading timestamp', fatal=False), 1000, time.time())

View File

@@ -718,19 +718,10 @@ class BBCIE(BBCCoUkIE):
webpage = self._download_webpage(url, playlist_id)
timestamp = None
playlist_title = None
playlist_description = None
ld = self._parse_json(
self._search_regex(
r'(?s)<script type="application/ld\+json">(.+?)</script>',
webpage, 'ld json', default='{}'),
playlist_id, fatal=False)
if ld:
timestamp = parse_iso8601(ld.get('datePublished'))
playlist_title = ld.get('headline')
playlist_description = ld.get('articleBody')
json_ld_info = self._search_json_ld(webpage, playlist_id, default=None)
timestamp = json_ld_info.get('timestamp')
playlist_title = json_ld_info.get('title')
playlist_description = json_ld_info.get('description')
if not timestamp:
timestamp = parse_iso8601(self._search_regex(

View File

@@ -34,7 +34,7 @@ class BeegIE(InfoExtractor):
video_id = self._match_id(url)
video = self._download_json(
'http://beeg.com/api/v5/video/%s' % video_id, video_id)
'https://api.beeg.com/api/v5/video/%s' % video_id, video_id)
def split(o, e):
def cut(s, x):
@@ -60,7 +60,7 @@ class BeegIE(InfoExtractor):
def decrypt_url(encrypted_url):
encrypted_url = self._proto_relative_url(
encrypted_url.replace('{DATA_MARKERS}', ''), 'http:')
encrypted_url.replace('{DATA_MARKERS}', ''), 'https:')
key = self._search_regex(
r'/key=(.*?)%2Cend=', encrypted_url, 'key', default=None)
if not key:

View File

@@ -0,0 +1,85 @@
# coding: utf-8
from __future__ import unicode_literals
import base64
import re
from .common import InfoExtractor
from ..compat import compat_urllib_parse_unquote
class BigflixIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?bigflix\.com/.+/(?P<id>[0-9]+)'
_TESTS = [{
'url': 'http://www.bigflix.com/Hindi-movies/Action-movies/Singham-Returns/16537',
'md5': 'ec76aa9b1129e2e5b301a474e54fab74',
'info_dict': {
'id': '16537',
'ext': 'mp4',
'title': 'Singham Returns',
'description': 'md5:3d2ba5815f14911d5cc6a501ae0cf65d',
}
}, {
# 2 formats
'url': 'http://www.bigflix.com/Tamil-movies/Drama-movies/Madarasapatinam/16070',
'info_dict': {
'id': '16070',
'ext': 'mp4',
'title': 'Madarasapatinam',
'description': 'md5:63b9b8ed79189c6f0418c26d9a3452ca',
'formats': 'mincount:2',
},
'params': {
'skip_download': True,
}
}, {
# multiple formats
'url': 'http://www.bigflix.com/Malayalam-movies/Drama-movies/Indian-Rupee/15967',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
title = self._html_search_regex(
r'<div[^>]+class=["\']pagetitle["\'][^>]*>(.+?)</div>',
webpage, 'title')
def decode_url(quoted_b64_url):
return base64.b64decode(compat_urllib_parse_unquote(
quoted_b64_url).encode('ascii')).decode('utf-8')
formats = []
for height, encoded_url in re.findall(
r'ContentURL_(\d{3,4})[pP][^=]+=([^&]+)', webpage):
video_url = decode_url(encoded_url)
f = {
'url': video_url,
'format_id': '%sp' % height,
'height': int(height),
}
if video_url.startswith('rtmp'):
f['ext'] = 'flv'
formats.append(f)
file_url = self._search_regex(
r'file=([^&]+)', webpage, 'video url', default=None)
if file_url:
video_url = decode_url(file_url)
if all(f['url'] != video_url for f in formats):
formats.append({
'url': decode_url(file_url),
})
self._sort_formats(formats)
description = self._html_search_meta('description', webpage)
return {
'id': video_id,
'title': title,
'description': description,
'formats': formats
}

View File

@@ -9,9 +9,9 @@ from ..utils import parse_duration
class Canalc2IE(InfoExtractor):
IE_NAME = 'canalc2.tv'
_VALID_URL = r'https?://(?:www\.)?canalc2\.tv/video/(?P<id>\d+)'
_VALID_URL = r'https?://(?:(?:www\.)?canalc2\.tv/video/|archives-canalc2\.u-strasbg\.fr/video\.asp\?.*\bidVideo=)(?P<id>\d+)'
_TEST = {
_TESTS = [{
'url': 'http://www.canalc2.tv/video/12163',
'md5': '060158428b650f896c542dfbb3d6487f',
'info_dict': {
@@ -23,24 +23,36 @@ class Canalc2IE(InfoExtractor):
'params': {
'skip_download': True, # Requires rtmpdump
}
}
}, {
'url': 'http://archives-canalc2.u-strasbg.fr/video.asp?idVideo=11427&voir=oui',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
video_url = self._search_regex(
r'jwplayer\((["\'])Player\1\)\.setup\({[^}]*file\s*:\s*(["\'])(?P<file>.+?)\2',
webpage, 'video_url', group='file')
formats = [{'url': video_url}]
if video_url.startswith('rtmp://'):
rtmp = re.search(r'^(?P<url>rtmp://[^/]+/(?P<app>.+/))(?P<play_path>mp4:.+)$', video_url)
formats[0].update({
'url': rtmp.group('url'),
'ext': 'flv',
'app': rtmp.group('app'),
'play_path': rtmp.group('play_path'),
'page_url': url,
})
webpage = self._download_webpage(
'http://www.canalc2.tv/video/%s' % video_id, video_id)
formats = []
for _, video_url in re.findall(r'file\s*=\s*(["\'])(.+?)\1', webpage):
if video_url.startswith('rtmp://'):
rtmp = re.search(
r'^(?P<url>rtmp://[^/]+/(?P<app>.+/))(?P<play_path>mp4:.+)$', video_url)
formats.append({
'url': rtmp.group('url'),
'format_id': 'rtmp',
'ext': 'flv',
'app': rtmp.group('app'),
'play_path': rtmp.group('play_path'),
'page_url': url,
})
else:
formats.append({
'url': video_url,
'format_id': 'http',
})
self._sort_formats(formats)
title = self._html_search_regex(
r'(?s)class="[^"]*col_description[^"]*">.*?<h3>(.*?)</h3>', webpage, 'title')

View File

@@ -0,0 +1,65 @@
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import float_or_none
class CanvasIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?canvas\.be/video/(?:[^/]+/)*(?P<id>[^/?#&]+)'
_TEST = {
'url': 'http://www.canvas.be/video/de-afspraak/najaar-2015/de-afspraak-veilt-voor-de-warmste-week',
'md5': 'ea838375a547ac787d4064d8c7860a6c',
'info_dict': {
'id': 'mz-ast-5e5f90b6-2d72-4c40-82c2-e134f884e93e',
'display_id': 'de-afspraak-veilt-voor-de-warmste-week',
'ext': 'mp4',
'title': 'De afspraak veilt voor de Warmste Week',
'description': 'md5:24cb860c320dc2be7358e0e5aa317ba6',
'thumbnail': 're:^https?://.*\.jpg$',
'duration': 49.02,
}
}
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
title = self._search_regex(
r'<h1[^>]+class="video__body__header__title"[^>]*>(.+?)</h1>',
webpage, 'title', default=None) or self._og_search_title(webpage)
video_id = self._html_search_regex(
r'data-video=(["\'])(?P<id>.+?)\1', webpage, 'video id', group='id')
data = self._download_json(
'https://mediazone.vrt.be/api/v1/canvas/assets/%s' % video_id, display_id)
formats = []
for target in data['targetUrls']:
format_url, format_type = target.get('url'), target.get('type')
if not format_url or not format_type:
continue
if format_type == 'HLS':
formats.extend(self._extract_m3u8_formats(
format_url, display_id, entry_protocol='m3u8_native',
ext='mp4', preference=0, fatal=False, m3u8_id=format_type))
elif format_type == 'HDS':
formats.extend(self._extract_f4m_formats(
format_url, display_id, f4m_id=format_type, fatal=False))
else:
formats.append({
'format_id': format_type,
'url': format_url,
})
self._sort_formats(formats)
return {
'id': video_id,
'display_id': display_id,
'title': title,
'description': self._og_search_description(webpage),
'formats': formats,
'duration': float_or_none(data.get('duration'), 1000),
'thumbnail': data.get('posterImageUrl'),
}

View File

@@ -35,6 +35,11 @@ class CBSNewsIE(InfoExtractor):
'title': 'Fort Hood shooting: Army downplays mental illness as cause of attack',
'thumbnail': 're:^https?://.*\.jpg$',
'duration': 205,
'subtitles': {
'en': [{
'ext': 'ttml',
}],
},
},
'params': {
# rtmp download
@@ -85,10 +90,18 @@ class CBSNewsIE(InfoExtractor):
fmt['ext'] = 'mp4'
formats.append(fmt)
subtitles = {}
if 'mpxRefId' in video_info:
subtitles['en'] = [{
'ext': 'ttml',
'url': 'http://www.cbsnews.com/videos/captions/%s.adb_xml' % video_info['mpxRefId'],
}]
return {
'id': video_id,
'title': title,
'thumbnail': thumbnail,
'duration': duration,
'formats': formats,
'subtitles': subtitles,
}

View File

@@ -34,6 +34,7 @@ from ..utils import (
fix_xml_ampersands,
float_or_none,
int_or_none,
parse_iso8601,
RegexNotFoundError,
sanitize_filename,
sanitized_Request,
@@ -313,9 +314,9 @@ class InfoExtractor(object):
except ExtractorError:
raise
except compat_http_client.IncompleteRead as e:
raise ExtractorError('A network error has occured.', cause=e, expected=True)
raise ExtractorError('A network error has occurred.', cause=e, expected=True)
except (KeyError, StopIteration) as e:
raise ExtractorError('An extractor error has occured.', cause=e)
raise ExtractorError('An extractor error has occurred.', cause=e)
def set_downloader(self, downloader):
"""Sets the downloader for this IE."""
@@ -762,6 +763,42 @@ class InfoExtractor(object):
return self._html_search_meta('twitter:player', html,
'twitter card player')
def _search_json_ld(self, html, video_id, **kwargs):
json_ld = self._search_regex(
r'(?s)<script[^>]+type=(["\'])application/ld\+json\1[^>]*>(?P<json_ld>.+?)</script>',
html, 'JSON-LD', group='json_ld', **kwargs)
if not json_ld:
return {}
return self._json_ld(json_ld, video_id, fatal=kwargs.get('fatal', True))
def _json_ld(self, json_ld, video_id, fatal=True):
if isinstance(json_ld, compat_str):
json_ld = self._parse_json(json_ld, video_id, fatal=fatal)
if not json_ld:
return {}
info = {}
if json_ld.get('@context') == 'http://schema.org':
item_type = json_ld.get('@type')
if item_type == 'TVEpisode':
info.update({
'episode': unescapeHTML(json_ld.get('name')),
'episode_number': int_or_none(json_ld.get('episodeNumber')),
'description': unescapeHTML(json_ld.get('description')),
})
part_of_season = json_ld.get('partOfSeason')
if isinstance(part_of_season, dict) and part_of_season.get('@type') == 'TVSeason':
info['season_number'] = int_or_none(part_of_season.get('seasonNumber'))
part_of_series = json_ld.get('partOfSeries')
if isinstance(part_of_series, dict) and part_of_series.get('@type') == 'TVSeries':
info['series'] = unescapeHTML(part_of_series.get('name'))
elif item_type == 'Article':
info.update({
'timestamp': parse_iso8601(json_ld.get('datePublished')),
'title': unescapeHTML(json_ld.get('headline')),
'description': unescapeHTML(json_ld.get('articleBody')),
})
return dict((k, v) for k, v in info.items() if v is not None)
@staticmethod
def _hidden_inputs(html):
html = re.sub(r'<!--(?:(?!<!--).)*-->', '', html)
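For reference, a minimal, self-contained sketch of the kind of schema.org markup the new _search_json_ld/_json_ld helpers target. Every value below is invented for illustration; only the field names and the TVEpisode handling mirror the code above.

# Illustrative only: hand-written schema.org JSON-LD of the TVEpisode shape
# that _json_ld() understands; all values here are made up for this sketch.
import json

sample = json.dumps({
    '@context': 'http://schema.org',
    '@type': 'TVEpisode',
    'name': 'Pilot',
    'episodeNumber': '1',
    'description': 'First episode.',
    'partOfSeason': {'@type': 'TVSeason', 'seasonNumber': '2'},
    'partOfSeries': {'@type': 'TVSeries', 'name': 'Some Show'},
})

json_ld = json.loads(sample)
info = {}
if json_ld.get('@context') == 'http://schema.org' and json_ld.get('@type') == 'TVEpisode':
    info = {
        'episode': json_ld.get('name'),
        'episode_number': int(json_ld['episodeNumber']),
        'season_number': int(json_ld['partOfSeason']['seasonNumber']),
        'series': json_ld['partOfSeries']['name'],
        'description': json_ld.get('description'),
    }
print(info)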

View File

@@ -0,0 +1,63 @@
from __future__ import unicode_literals
import re
from .common import InfoExtractor
from ..utils import int_or_none
class CultureUnpluggedIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?cultureunplugged\.com/documentary/watch-online/play/(?P<id>\d+)(?:/(?P<display_id>[^/]+))?'
_TESTS = [{
'url': 'http://www.cultureunplugged.com/documentary/watch-online/play/53662/The-Next--Best-West',
'md5': 'ac6c093b089f7d05e79934dcb3d228fc',
'info_dict': {
'id': '53662',
'display_id': 'The-Next--Best-West',
'ext': 'mp4',
'title': 'The Next, Best West',
'description': 'md5:0423cd00833dea1519cf014e9d0903b1',
'thumbnail': 're:^https?://.*\.jpg$',
'creator': 'Coldstream Creative',
'duration': 2203,
'view_count': int,
}
}, {
'url': 'http://www.cultureunplugged.com/documentary/watch-online/play/53662',
'only_matching': True,
}]
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
video_id = mobj.group('id')
display_id = mobj.group('display_id') or video_id
movie_data = self._download_json(
'http://www.cultureunplugged.com/movie-data/cu-%s.json' % video_id, display_id)
video_url = movie_data['url']
title = movie_data['title']
description = movie_data.get('synopsis')
creator = movie_data.get('producer')
duration = int_or_none(movie_data.get('duration'))
view_count = int_or_none(movie_data.get('views'))
thumbnails = [{
'url': movie_data['%s_thumb' % size],
'id': size,
'preference': preference,
} for preference, size in enumerate((
'small', 'large')) if movie_data.get('%s_thumb' % size)]
return {
'id': video_id,
'display_id': display_id,
'url': video_url,
'title': title,
'description': description,
'creator': creator,
'duration': duration,
'view_count': view_count,
'thumbnails': thumbnails,
}

View File

@@ -0,0 +1,88 @@
# coding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import (
int_or_none,
parse_iso8601,
)
class CWTVIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?cw(?:tv|seed)\.com/shows/(?:[^/]+/){2}\?play=(?P<id>[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12})'
_TESTS = [{
'url': 'http://cwtv.com/shows/arrow/legends-of-yesterday/?play=6b15e985-9345-4f60-baf8-56e96be57c63',
'info_dict': {
'id': '6b15e985-9345-4f60-baf8-56e96be57c63',
'ext': 'mp4',
'title': 'Legends of Yesterday',
'description': 'Oliver and Barry Allen take Kendra Saunders and Carter Hall to a remote location to keep them hidden from Vandal Savage while they figure out how to defeat him.',
'duration': 2665,
'series': 'Arrow',
'season_number': 4,
'season': '4',
'episode_number': 8,
'upload_date': '20151203',
'timestamp': 1449122100,
},
'params': {
# m3u8 download
'skip_download': True,
}
}, {
'url': 'http://www.cwseed.com/shows/whose-line-is-it-anyway/jeff-davis-4/?play=24282b12-ead2-42f2-95ad-26770c2c6088',
'info_dict': {
'id': '24282b12-ead2-42f2-95ad-26770c2c6088',
'ext': 'mp4',
'title': 'Jeff Davis 4',
'description': 'Jeff Davis is back to make you laugh.',
'duration': 1263,
'series': 'Whose Line Is It Anyway?',
'season_number': 11,
'season': '11',
'episode_number': 20,
'upload_date': '20151006',
'timestamp': 1444107300,
},
'params': {
# m3u8 download
'skip_download': True,
}
}]
def _real_extract(self, url):
video_id = self._match_id(url)
video_data = self._download_json(
'http://metaframe.digitalsmiths.tv/v2/CWtv/assets/%s/partner/132?format=json' % video_id, video_id)
formats = self._extract_m3u8_formats(
video_data['videos']['variantplaylist']['uri'], video_id, 'mp4')
thumbnails = [{
'url': image['uri'],
'width': image.get('width'),
'height': image.get('height'),
} for image_id, image in video_data['images'].items() if image.get('uri')] if video_data.get('images') else None
video_metadata = video_data['assetFields']
subtitles = {
'en': [{
'url': video_metadata['UnicornCcUrl'],
}],
} if video_metadata.get('UnicornCcUrl') else None
return {
'id': video_id,
'title': video_metadata['title'],
'description': video_metadata.get('description'),
'duration': int_or_none(video_metadata.get('duration')),
'series': video_metadata.get('seriesName'),
'season_number': int_or_none(video_metadata.get('seasonNumber')),
'season': video_metadata.get('seasonName'),
'episode_number': int_or_none(video_metadata.get('episodeNumber')),
'timestamp': parse_iso8601(video_data.get('startTime')),
'thumbnails': thumbnails,
'formats': formats,
'subtitles': subtitles,
}

View File

@@ -37,7 +37,7 @@ class DailymotionBaseInfoExtractor(InfoExtractor):
class DailymotionIE(DailymotionBaseInfoExtractor):
_VALID_URL = r'(?i)(?:https?://)?(?:(www|touch)\.)?dailymotion\.[a-z]{2,3}/(?:(embed|#)/)?video/(?P<id>[^/?_]+)'
_VALID_URL = r'(?i)(?:https?://)?(?:(www|touch)\.)?dailymotion\.[a-z]{2,3}/(?:(?:embed|swf|#)/)?video/(?P<id>[^/?_]+)'
IE_NAME = 'dailymotion'
_FORMATS = [
@@ -104,6 +104,10 @@ class DailymotionIE(DailymotionBaseInfoExtractor):
{
'url': 'http://www.dailymotion.com/video/x20su5f_the-power-of-nightmares-1-the-rise-of-the-politics-of-fear-bbc-2004_news',
'only_matching': True,
},
{
'url': 'http://www.dailymotion.com/swf/video/x3n92nf',
'only_matching': True,
}
]
@@ -149,14 +153,15 @@ class DailymotionIE(DailymotionBaseInfoExtractor):
ext = determine_ext(media_url)
if type_ == 'application/x-mpegURL' or ext == 'm3u8':
formats.extend(self._extract_m3u8_formats(
media_url, video_id, 'mp4', m3u8_id='hls', fatal=False))
media_url, video_id, 'mp4', preference=-1,
m3u8_id='hls', fatal=False))
elif type_ == 'application/f4m' or ext == 'f4m':
formats.extend(self._extract_f4m_formats(
media_url, video_id, preference=-1, f4m_id='hds', fatal=False))
else:
f = {
'url': media_url,
'format_id': quality,
'format_id': 'http-%s' % quality,
}
m = re.search(r'H264-(?P<width>\d+)x(?P<height>\d+)', media_url)
if m:
@@ -335,7 +340,7 @@ class DailymotionPlaylistIE(DailymotionBaseInfoExtractor):
class DailymotionUserIE(DailymotionPlaylistIE):
IE_NAME = 'dailymotion:user'
_VALID_URL = r'https?://(?:www\.)?dailymotion\.[a-z]{2,3}/(?!(?:embed|#|video|playlist)/)(?:(?:old/)?user/)?(?P<user>[^/]+)'
_VALID_URL = r'https?://(?:www\.)?dailymotion\.[a-z]{2,3}/(?!(?:embed|swf|#|video|playlist)/)(?:(?:old/)?user/)?(?P<user>[^/]+)'
_PAGE_TEMPLATE = 'http://www.dailymotion.com/user/%s/%s'
_TESTS = [{
'url': 'https://www.dailymotion.com/user/nqtv',

View File

@@ -5,7 +5,10 @@ import re
import base64
from .common import InfoExtractor
from ..compat import compat_urllib_parse
from ..compat import (
compat_urllib_parse,
compat_str,
)
from ..utils import (
int_or_none,
parse_iso8601,
@@ -186,7 +189,8 @@ class DCNSeasonIE(InfoExtractor):
entries = []
for video in show['videos']:
video_id = compat_str(video['id'])
entries.append(self.url_result(
'http://www.dcndigital.ae/media/%s' % video['id'], 'DCNVideo'))
'http://www.dcndigital.ae/media/%s' % video_id, 'DCNVideo', video_id))
return self.playlist_result(entries, season_id, title)

View File

@@ -7,9 +7,9 @@ from .common import InfoExtractor
from ..utils import int_or_none
class UltimediaIE(InfoExtractor):
class DigitekaIE(InfoExtractor):
_VALID_URL = r'''(?x)
https?://(?:www\.)?ultimedia\.com/
https?://(?:www\.)?(?:digiteka\.net|ultimedia\.com)/
(?:
deliver/
(?P<embed_type>
@@ -56,6 +56,9 @@ class UltimediaIE(InfoExtractor):
'timestamp': 1424760500,
'uploader_id': '3rfzk',
},
}, {
'url': 'https://www.digiteka.net/deliver/generic/iframe/mdtk/01637594/src/lqm3kl/zone/1/showtitle/1/autoplay/yes',
'only_matching': True,
}]
@staticmethod

View File

@@ -91,7 +91,7 @@ class DRTVIE(InfoExtractor):
subtitles_list = asset.get('SubtitlesList')
if isinstance(subtitles_list, list):
LANGS = {
'Danish': 'dk',
'Danish': 'da',
}
for subs in subtitles_list:
lang = subs['Language']

View File

@@ -105,7 +105,7 @@ class FacebookIE(InfoExtractor):
login_results, 'login error', default=None, group='error')
if error:
raise ExtractorError('Unable to login: %s' % error, expected=True)
self._downloader.report_warning('unable to log in: bad username/password, or exceded login rate limit (~3/min). Check credentials or wait.')
self._downloader.report_warning('unable to log in: bad username/password, or exceeded login rate limit (~3/min). Check credentials or wait.')
return
fb_dtsg = self._search_regex(
@@ -126,7 +126,7 @@ class FacebookIE(InfoExtractor):
check_response = self._download_webpage(check_req, None,
note='Confirming login')
if re.search(r'id="checkpointSubmitButton"', check_response) is not None:
self._downloader.report_warning('Unable to confirm login, you have to login in your brower and authorize the login.')
self._downloader.report_warning('Unable to confirm login, you have to login in your browser and authorize the login.')
except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
self._downloader.report_warning('unable to log in: %s' % error_to_compat_str(err))
return

View File

@@ -57,7 +57,7 @@ from .pladform import PladformIE
from .videomore import VideomoreIE
from .googledrive import GoogleDriveIE
from .jwplatform import JWPlatformIE
from .ultimedia import UltimediaIE
from .digiteka import DigitekaIE
class GenericIE(InfoExtractor):
@@ -487,7 +487,7 @@ class GenericIE(InfoExtractor):
'description': 'md5:8145d19d320ff3e52f28401f4c4283b9',
}
},
# Embeded Ustream video
# Embedded Ustream video
{
'url': 'http://www.american.edu/spa/pti/nsa-privacy-janus-2014.cfm',
'md5': '27b99cdb639c9b12a79bca876a073417',
@@ -1402,7 +1402,7 @@ class GenericIE(InfoExtractor):
# Look for embedded Dailymotion player
matches = re.findall(
r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:www\.)?dailymotion\.com/embed/video/.+?)\1', webpage)
r'<(?:embed|iframe)[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:www\.)?dailymotion\.com/(?:embed|swf)/video/.+?)\1', webpage)
if matches:
return _playlist_from_matches(
matches, lambda m: unescapeHTML(m[1]))
@@ -1644,7 +1644,7 @@ class GenericIE(InfoExtractor):
if myvi_url:
return self.url_result(myvi_url)
# Look for embeded soundcloud player
# Look for embedded soundcloud player
mobj = re.search(
r'<iframe\s+(?:[a-zA-Z0-9_-]+="[^"]+"\s+)*src="(?P<url>https?://(?:w\.)?soundcloud\.com/player[^"]+)"',
webpage)
@@ -1814,10 +1814,10 @@ class GenericIE(InfoExtractor):
if mobj is not None:
return self.url_result(unescapeHTML(mobj.group('url')), 'ScreenwaveMedia')
# Look for Ulltimedia embeds
ultimedia_url = UltimediaIE._extract_url(webpage)
if ultimedia_url:
return self.url_result(self._proto_relative_url(ultimedia_url), 'Ultimedia')
# Look for Digiteka embeds
digiteka_url = DigitekaIE._extract_url(webpage)
if digiteka_url:
return self.url_result(self._proto_relative_url(digiteka_url), DigitekaIE.ie_key())
# Look for AdobeTVVideo embeds
mobj = re.search(

View File

@@ -1,31 +0,0 @@
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import smuggle_url
class HistoryIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?history\.com/(?:[^/]+/)+(?P<id>[^/]+?)(?:$|[?#])'
_TESTS = [{
'url': 'http://www.history.com/topics/valentines-day/history-of-valentines-day/videos/bet-you-didnt-know-valentines-day?m=528e394da93ae&s=undefined&f=1&free=false',
'md5': '6fe632d033c92aa10b8d4a9be047a7c5',
'info_dict': {
'id': 'bLx5Dv5Aka1G',
'ext': 'mp4',
'title': "Bet You Didn't Know: Valentine's Day",
'description': 'md5:7b57ea4829b391995b405fa60bd7b5f7',
},
'add_ie': ['ThePlatform'],
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
video_url = self._search_regex(
r'data-href="[^"]*/%s"[^>]+data-release-url="([^"]+)"' % video_id,
webpage, 'video url')
return self.url_result(smuggle_url(video_url, {'sig': {'key': 'crazyjava', 'secret': 's3cr3t'}}))

View File

@@ -159,6 +159,9 @@ class HitboxLiveIE(HitboxIE):
cdns = player_config.get('cdns')
servers = []
for cdn in cdns:
# Subscribe URLs are not playable
if cdn.get('rtmpSubscribe') is True:
continue
base_url = cdn.get('netConnectionUrl')
host = re.search('.+\.([^\.]+\.[^\./]+)/.+', base_url).group(1)
if base_url not in servers:

View File

@@ -14,6 +14,7 @@ from ..utils import (
class IPrimaIE(InfoExtractor):
_WORKING = False
_VALID_URL = r'https?://play\.iprima\.cz/(?:[^/]+/)*(?P<id>[^?#]+)'
_TESTS = [{

View File

@@ -214,8 +214,8 @@ class IqiyiIE(InfoExtractor):
def get_enc_key(self, swf_url, video_id):
# TODO: automatic key extraction
# last update at 2015-12-18 for Zombie::bite
enc_key = '8b6b683780897eb8d9a48a02ccc4817d'[::-1]
# last update at 2016-01-22 for Zombie::bite
enc_key = '6ab6d0280511493ba85594779759d4ed'
return enc_key
def _real_extract(self, url):

View File

@@ -32,7 +32,7 @@ class IviIE(InfoExtractor):
},
'skip': 'Only works from Russia',
},
# Serial's serie
# Serial's series
{
'url': 'http://www.ivi.ru/watch/dvoe_iz_lartsa/9549',
'md5': '221f56b35e3ed815fde2df71032f4b3e',

View File

@@ -49,7 +49,7 @@ class KanalPlayIE(InfoExtractor):
subs = self._download_json(
'http://www.kanal%splay.se/api/subtitles/%s' % (channel_id, video_id),
video_id, 'Downloading subtitles JSON', fatal=False)
return {'se': [{'ext': 'srt', 'data': self._fix_subtitles(subs)}]} if subs else {}
return {'sv': [{'ext': 'srt', 'data': self._fix_subtitles(subs)}]} if subs else {}
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)

View File

@@ -0,0 +1,34 @@
from __future__ import unicode_literals
from .common import InfoExtractor
class LemondeIE(InfoExtractor):
_VALID_URL = r'https?://(?:.+?\.)?lemonde\.fr/(?:[^/]+/)*(?P<id>[^/]+)\.html'
_TESTS = [{
'url': 'http://www.lemonde.fr/police-justice/video/2016/01/19/comprendre-l-affaire-bygmalion-en-cinq-minutes_4849702_1653578.html',
'md5': '01fb3c92de4c12c573343d63e163d302',
'info_dict': {
'id': 'lqm3kl',
'ext': 'mp4',
'title': "Comprendre l'affaire Bygmalion en 5 minutes",
'thumbnail': 're:^https?://.*\.jpg',
'duration': 320,
'upload_date': '20160119',
'timestamp': 1453194778,
'uploader_id': '3pmkp',
},
}, {
'url': 'http://redaction.actu.lemonde.fr/societe/video/2016/01/18/calais-debut-des-travaux-de-defrichement-dans-la-jungle_4849233_3224.html',
'only_matching': True,
}]
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
digiteka_url = self._proto_relative_url(self._search_regex(
r'url\s*:\s*(["\'])(?P<url>(?:https?://)?//(?:www\.)?(?:digiteka\.net|ultimedia\.com)/deliver/.+?)\1',
webpage, 'digiteka url', group='url'))
return self.url_result(digiteka_url, 'Digiteka')

View File

@@ -4,6 +4,7 @@ from __future__ import unicode_literals
import datetime
import re
import time
import base64
from .common import InfoExtractor
from ..compat import (
@@ -16,7 +17,9 @@ from ..utils import (
parse_iso8601,
sanitized_Request,
int_or_none,
str_or_none,
encode_data_uri,
url_basename,
)
@@ -239,3 +242,80 @@ class LetvPlaylistIE(LetvTvIE):
},
'playlist_mincount': 7
}]
class LetvCloudIE(InfoExtractor):
IE_DESC = '乐视云'
_VALID_URL = r'https?://yuntv\.letv\.com/bcloud.html\?.+'
_TESTS = [{
'url': 'http://yuntv.letv.com/bcloud.html?uu=p7jnfw5hw9&vu=467623dedf',
'md5': '26450599afd64c513bc77030ad15db44',
'info_dict': {
'id': 'p7jnfw5hw9_467623dedf',
'ext': 'mp4',
'title': 'Video p7jnfw5hw9_467623dedf',
},
}, {
'url': 'http://yuntv.letv.com/bcloud.html?uu=p7jnfw5hw9&vu=ec93197892&pu=2c7cd40209&auto_play=1&gpcflag=1&width=640&height=360',
'info_dict': {
'id': 'p7jnfw5hw9_ec93197892',
'ext': 'mp4',
'title': 'Video p7jnfw5hw9_ec93197892',
},
}, {
'url': 'http://yuntv.letv.com/bcloud.html?uu=p7jnfw5hw9&vu=187060b6fd',
'info_dict': {
'id': 'p7jnfw5hw9_187060b6fd',
'ext': 'mp4',
'title': 'Video p7jnfw5hw9_187060b6fd',
},
}]
def _real_extract(self, url):
uu_mobj = re.search('uu=([\w]+)', url)
vu_mobj = re.search('vu=([\w]+)', url)
if not uu_mobj or not vu_mobj:
raise ExtractorError('Invalid URL: %s' % url, expected=True)
uu = uu_mobj.group(1)
vu = vu_mobj.group(1)
media_id = uu + '_' + vu
play_json_req = sanitized_Request(
'http://api.letvcloud.com/gpc.php?cf=html5&sign=signxxxxx&ver=2.2&format=json&' +
'uu=' + uu + '&vu=' + vu)
play_json = self._download_json(play_json_req, media_id, 'Downloading playJson data')
if not play_json.get('data'):
if play_json.get('message'):
raise ExtractorError('Letv cloud said: %s' % play_json['message'], expected=True)
elif play_json.get('code'):
raise ExtractorError('Letv cloud returned error %d' % play_json['code'], expected=True)
else:
raise ExtractorError('Letv cloud returned an unknown error')
def b64decode(s):
return base64.b64decode(s.encode('utf-8')).decode('utf-8')
formats = []
for media in play_json['data']['video_info']['media'].values():
play_url = media['play_url']
url = b64decode(play_url['main_url'])
decoded_url = b64decode(url_basename(url))
formats.append({
'url': url,
'ext': determine_ext(decoded_url),
'format_id': int_or_none(play_url.get('vtype')),
'format_note': str_or_none(play_url.get('definition')),
'width': int_or_none(play_url.get('vwidth')),
'height': int_or_none(play_url.get('vheight')),
})
self._sort_formats(formats)
return {
'id': media_id,
'title': 'Video %s' % media_id,
'formats': formats,
}

View File

@@ -0,0 +1,37 @@
from __future__ import unicode_literals
import re
from .nuevo import NuevoBaseIE
class LoveHomePornIE(NuevoBaseIE):
_VALID_URL = r'https?://(?:www\.)?lovehomeporn\.com/video/(?P<id>\d+)(?:/(?P<display_id>[^/?#&]+))?'
_TEST = {
'url': 'http://lovehomeporn.com/video/48483/stunning-busty-brunette-girlfriend-sucking-and-riding-a-big-dick#menu',
'info_dict': {
'id': '48483',
'display_id': 'stunning-busty-brunette-girlfriend-sucking-and-riding-a-big-dick',
'ext': 'mp4',
'title': 'Stunning busty brunette girlfriend sucking and riding a big dick',
'age_limit': 18,
'duration': 238.47,
},
'params': {
'skip_download': True,
}
}
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
video_id = mobj.group('id')
display_id = mobj.group('display_id')
info = self._extract_nuevo(
'http://lovehomeporn.com/media/nuevo/config.php?key=%s' % video_id,
video_id)
info.update({
'display_id': display_id,
'age_limit': 18
})
return info

View File

@@ -17,7 +17,7 @@ class MDRIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?(?:mdr|kika)\.de/(?:.*)/[a-z]+(?P<id>\d+)(?:_.+?)?\.html'
_TESTS = [{
# MDR regularily deletes its videos
# MDR regularly deletes its videos
'url': 'http://www.mdr.de/fakt/video189002.html',
'only_matching': True,
}, {

View File

@@ -100,7 +100,7 @@ class NBCSportsVPlayerIE(InfoExtractor):
class NBCSportsIE(InfoExtractor):
# Does not include https becuase its certificate is invalid
# Does not include https because its certificate is invalid
_VALID_URL = r'http://www\.nbcsports\.com//?(?:[^/]+/)+(?P<id>[0-9a-z-]+)'
_TEST = {

View File

@@ -12,7 +12,10 @@ from ..compat import (
compat_str,
compat_itertools_count,
)
from ..utils import sanitized_Request
from ..utils import (
sanitized_Request,
float_or_none,
)
class NetEaseMusicBaseIE(InfoExtractor):
@@ -32,23 +35,32 @@ class NetEaseMusicBaseIE(InfoExtractor):
result = b64encode(m.digest()).decode('ascii')
return result.replace('/', '_').replace('+', '-')
@classmethod
def extract_formats(cls, info):
def extract_formats(self, info):
formats = []
for song_format in cls._FORMATS:
for song_format in self._FORMATS:
details = info.get(song_format)
if not details:
continue
formats.append({
'url': 'http://m5.music.126.net/%s/%s.%s' %
(cls._encrypt(details['dfsId']), details['dfsId'],
details['extension']),
'ext': details.get('extension'),
'abr': details.get('bitrate', 0) / 1000,
'format_id': song_format,
'filesize': details.get('size'),
'asr': details.get('sr')
})
song_file_path = '/%s/%s.%s' % (
self._encrypt(details['dfsId']), details['dfsId'], details['extension'])
# 203.130.59.9, 124.40.233.182, 115.231.74.139, etc. provide a reverse proxy-like feature
# from NetEase's CDN provider that can be used if m5.music.126.net does not
# work, especially for users outside of Mainland China
# via: https://github.com/JixunMoe/unblock-163/issues/3#issuecomment-163115880
for host in ('http://m5.music.126.net', 'http://115.231.74.139/m1.music.126.net',
'http://124.40.233.182/m1.music.126.net', 'http://203.130.59.9/m1.music.126.net'):
song_url = host + song_file_path
if self._is_valid_url(song_url, info['id'], 'song'):
formats.append({
'url': song_url,
'ext': details.get('extension'),
'abr': float_or_none(details.get('bitrate'), scale=1000),
'format_id': song_format,
'filesize': details.get('size'),
'asr': details.get('sr')
})
break
return formats
@classmethod
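A rough sketch of the mirror-fallback idea the comment in the diff above describes: try each candidate host in order and keep the first song URL that answers. The host list comes straight from the diff; the probe here is a plain urlopen rather than InfoExtractor._is_valid_url, so treat it as an outline only.

# Sketch only: probe the CDN mirrors listed above and keep the first URL
# that responds; the extractor itself does this via self._is_valid_url().
import socket
try:
    from urllib.request import Request, urlopen  # Python 3
except ImportError:
    from urllib2 import Request, urlopen  # Python 2

HOSTS = (
    'http://m5.music.126.net',
    'http://115.231.74.139/m1.music.126.net',
    'http://124.40.233.182/m1.music.126.net',
    'http://203.130.59.9/m1.music.126.net',
)

def first_working_url(song_file_path, timeout=5):
    for host in HOSTS:
        candidate = host + song_file_path
        try:
            urlopen(Request(candidate), timeout=timeout)
            return candidate  # first mirror that answers wins
        except (IOError, socket.error):
            continue
    return None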

View File

@@ -223,7 +223,7 @@ class NHLVideocenterIE(NHLBaseInfoExtractor):
response = self._download_webpage(request_url, playlist_title)
response = self._fix_json(response)
if not response.strip():
self._downloader.report_warning('Got an empty reponse, trying '
self._downloader.report_warning('Got an empty response, trying '
'adding the "newvideos" parameter')
response = self._download_webpage(request_url + '&newvideos=true',
playlist_title)

View File

@@ -2,6 +2,7 @@
from __future__ import unicode_literals
from .common import InfoExtractor
from ..compat import compat_urlparse
from ..utils import (
int_or_none,
js_to_json,
@@ -34,7 +35,7 @@ class NTVDeIE(InfoExtractor):
webpage = self._download_webpage(url, video_id)
info = self._parse_json(self._search_regex(
r'(?s)ntv.pageInfo.article =\s(\{.*?\});', webpage, 'info'),
r'(?s)ntv\.pageInfo\.article\s*=\s*(\{.*?\});', webpage, 'info'),
video_id, transform_source=js_to_json)
timestamp = int_or_none(info.get('publishedDateAsUnixTimeStamp'))
vdata = self._parse_json(self._search_regex(
@@ -42,18 +43,24 @@ class NTVDeIE(InfoExtractor):
webpage, 'player data'),
video_id, transform_source=js_to_json)
duration = parse_duration(vdata.get('duration'))
formats = [{
'format_id': 'flash',
'url': 'rtmp://fms.n-tv.de/' + vdata['video'],
}, {
'format_id': 'mobile',
'url': 'http://video.n-tv.de' + vdata['videoMp4'],
'tbr': 400, # estimation
}]
m3u8_url = 'http://video.n-tv.de' + vdata['videoM3u8']
formats.extend(self._extract_m3u8_formats(
m3u8_url, video_id, ext='mp4',
entry_protocol='m3u8_native', preference=0))
formats = []
if vdata.get('video'):
formats.append({
'format_id': 'flash',
'url': 'rtmp://fms.n-tv.de/%s' % vdata['video'],
})
if vdata.get('videoMp4'):
formats.append({
'format_id': 'mobile',
'url': compat_urlparse.urljoin('http://video.n-tv.de', vdata['videoMp4']),
'tbr': 400, # estimation
})
if vdata.get('videoM3u8'):
m3u8_url = compat_urlparse.urljoin('http://video.n-tv.de', vdata['videoM3u8'])
formats.extend(self._extract_m3u8_formats(
m3u8_url, video_id, ext='mp4', entry_protocol='m3u8_native',
preference=0, m3u8_id='hls', fatal=False))
self._sort_formats(formats)
return {

View File

@@ -0,0 +1,38 @@
# encoding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import (
float_or_none,
xpath_text
)
class NuevoBaseIE(InfoExtractor):
def _extract_nuevo(self, config_url, video_id):
config = self._download_xml(
config_url, video_id, transform_source=lambda s: s.strip())
title = xpath_text(config, './title', 'title', fatal=True).strip()
video_id = xpath_text(config, './mediaid', default=video_id)
thumbnail = xpath_text(config, ['./image', './thumb'])
duration = float_or_none(xpath_text(config, './duration'))
formats = []
for element_name, format_id in (('file', 'sd'), ('filehd', 'hd')):
video_url = xpath_text(config, element_name)
if video_url:
formats.append({
'url': video_url,
'format_id': format_id,
})
self._check_formats(formats, video_id)
return {
'id': video_id,
'title': title,
'thumbnail': thumbnail,
'duration': duration,
'formats': formats
}
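For context, a guess at the shape of the nuevo player config that _extract_nuevo() reads, reconstructed purely from the xpath lookups above (title, mediaid, image/thumb, duration, file, filehd); real services may return more fields, and the values below are invented.

# Illustrative config matching the xpath lookups in _extract_nuevo();
# only the element names come from the code above, the values are made up.
import xml.etree.ElementTree as ET

sample_config = '''
<config>
  <title>Example clip</title>
  <mediaid>12345</mediaid>
  <image>http://example.com/thumb.jpg</image>
  <duration>216.78</duration>
  <file>http://example.com/video_sd.mp4</file>
  <filehd>http://example.com/video_hd.mp4</filehd>
</config>
'''

config = ET.fromstring(sample_config.strip())
formats = [
    {'url': config.findtext(name), 'format_id': fmt_id}
    for name, fmt_id in (('file', 'sd'), ('filehd', 'hd'))
    if config.findtext(name)
]
print(config.findtext('title'), formats)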

View File

@@ -21,7 +21,6 @@ class OraTVIE(InfoExtractor):
'ext': 'mp4',
'title': 'Vine & YouTube Stars Zach King & King Bach On Their Viral Videos!',
'description': 'md5:ebbc5b1424dd5dba7be7538148287ac1',
'duration': 1477,
}
}
@@ -30,14 +29,14 @@ class OraTVIE(InfoExtractor):
webpage = self._download_webpage(url, display_id)
video_data = self._search_regex(
r'"current"\s*:\s*({[^}]+?})', webpage, 'current video')
r'"(?:video|current)"\s*:\s*({[^}]+?})', webpage, 'current video')
m3u8_url = self._search_regex(
r'"hls_stream"\s*:\s*"([^"]+)', video_data, 'm3u8 url', None)
r'hls_stream"?\s*:\s*"([^"]+)', video_data, 'm3u8 url', None)
if m3u8_url:
formats = self._extract_m3u8_formats(
m3u8_url, display_id, 'mp4', 'm3u8_native',
m3u8_id='hls', fatal=False)
# simular to GameSpotIE
# similar to GameSpotIE
m3u8_path = compat_urlparse.urlparse(m3u8_url).path
QUALITIES_RE = r'((,[a-z]+\d+)+,?)'
available_qualities = self._search_regex(
@@ -62,14 +61,12 @@ class OraTVIE(InfoExtractor):
return {
'id': self._search_regex(
r'"video_id"\s*:\s*(\d+)', video_data, 'video id'),
r'"id"\s*:\s*(\d+)', video_data, 'video id', default=display_id),
'display_id': display_id,
'title': unescapeHTML(self._og_search_title(webpage)),
'description': get_element_by_attribute(
'class', 'video_txt_decription', webpage),
'thumbnail': self._proto_relative_url(self._search_regex(
r'"thumb"\s*:\s*"([^"]+)', video_data, 'thumbnail', None)),
'duration': int(self._search_regex(
r'"duration"\s*:\s*(\d+)', video_data, 'duration')),
'formats': formats,
}

View File

@@ -170,7 +170,21 @@ class ORFOE1IE(InfoExtractor):
class ORFFM4IE(InfoExtractor):
IE_NAME = 'orf:fm4'
IE_DESC = 'radio FM4'
_VALID_URL = r'http://fm4\.orf\.at/7tage/?#(?P<date>[0-9]+)/(?P<show>\w+)'
_VALID_URL = r'http://fm4\.orf\.at/(?:7tage/?#|player/)(?P<date>[0-9]+)/(?P<show>\w+)'
_TEST = {
'url': 'http://fm4.orf.at/player/20160110/IS/',
'md5': '01e736e8f1cef7e13246e880a59ad298',
'info_dict': {
'id': '2016-01-10_2100_tl_54_7DaysSun13_11244',
'ext': 'mp3',
'title': 'Im Sumpf',
'description': 'md5:384c543f866c4e422a55f66a62d669cd',
'duration': 7173,
'timestamp': 1452456073,
'upload_date': '20160110',
},
}
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)

View File

@@ -232,7 +232,7 @@ class PluralsightIE(PluralsightBaseIE):
# { a = author, cn = clip_id, lc = end, m = name }
return {
'id': clip['clipName'],
'id': clip.get('clipName') or clip['name'],
'title': '%s - %s' % (module['title'], clip['title']),
'duration': int_or_none(clip.get('duration')) or parse_duration(clip.get('formattedDuration')),
'creator': author,

View File

@@ -20,7 +20,7 @@ from ..utils import (
class ProSiebenSat1IE(InfoExtractor):
IE_NAME = 'prosiebensat1'
IE_DESC = 'ProSiebenSat.1 Digital'
_VALID_URL = r'https?://(?:www\.)?(?:(?:prosieben|prosiebenmaxx|sixx|sat1|kabeleins|the-voice-of-germany)\.(?:de|at|ch)|ran\.de|fem\.com)/(?P<id>.+)'
_VALID_URL = r'https?://(?:www\.)?(?:(?:prosieben|prosiebenmaxx|sixx|sat1|kabeleins|the-voice-of-germany|7tv)\.(?:de|at|ch)|ran\.de|fem\.com)/(?P<id>.+)'
_TESTS = [
{
@@ -32,7 +32,7 @@ class ProSiebenSat1IE(InfoExtractor):
'url': 'http://www.prosieben.de/tv/circus-halligalli/videos/218-staffel-2-episode-18-jahresrueckblick-ganze-folge',
'info_dict': {
'id': '2104602',
'ext': 'mp4',
'ext': 'flv',
'title': 'Episode 18 - Staffel 2',
'description': 'md5:8733c81b702ea472e069bc48bb658fc1',
'upload_date': '20131231',
@@ -138,14 +138,13 @@ class ProSiebenSat1IE(InfoExtractor):
'url': 'http://www.the-voice-of-germany.de/video/31-andreas-kuemmert-rocket-man-clip',
'info_dict': {
'id': '2572814',
'ext': 'mp4',
'ext': 'flv',
'title': 'Andreas Kümmert: Rocket Man',
'description': 'md5:6ddb02b0781c6adf778afea606652e38',
'upload_date': '20131017',
'duration': 469.88,
},
'params': {
# rtmp download
'skip_download': True,
},
},
@@ -153,13 +152,12 @@ class ProSiebenSat1IE(InfoExtractor):
'url': 'http://www.fem.com/wellness/videos/wellness-video-clip-kurztripps-zum-valentinstag.html',
'info_dict': {
'id': '2156342',
'ext': 'mp4',
'ext': 'flv',
'title': 'Kurztrips zum Valentinstag',
'description': 'Romantischer Kurztrip zum Valentinstag? Wir verraten, was sich hier wirklich lohnt.',
'description': 'Romantischer Kurztrip zum Valentinstag? Nina Heinemann verrät, was sich hier wirklich lohnt.',
'duration': 307.24,
},
'params': {
# rtmp download
'skip_download': True,
},
},
@@ -172,12 +170,26 @@ class ProSiebenSat1IE(InfoExtractor):
},
'playlist_count': 2,
},
{
'url': 'http://www.7tv.de/circus-halligalli/615-best-of-circus-halligalli-ganze-folge',
'info_dict': {
'id': '4187506',
'ext': 'flv',
'title': 'Best of Circus HalliGalli',
'description': 'md5:8849752efd90b9772c9db6fdf87fb9e9',
'upload_date': '20151229',
},
'params': {
'skip_download': True,
},
},
]
_CLIPID_REGEXES = [
r'"clip_id"\s*:\s+"(\d+)"',
r'clipid: "(\d+)"',
r'clip[iI]d=(\d+)',
r'clip[iI]d\s*=\s*["\'](\d+)',
r"'itemImageUrl'\s*:\s*'/dynamic/thumbnails/full/\d+/(\d+)",
]
_TITLE_REGEXES = [
@@ -186,12 +198,16 @@ class ProSiebenSat1IE(InfoExtractor):
r'<!-- start video -->\s*<h1>(.+?)</h1>',
r'<h1 class="att-name">\s*(.+?)</h1>',
r'<header class="module_header">\s*<h2>([^<]+)</h2>\s*</header>',
r'<h2 class="video-title" itemprop="name">\s*(.+?)</h2>',
r'<div[^>]+id="veeseoTitle"[^>]*>(.+?)</div>',
]
_DESCRIPTION_REGEXES = [
r'<p itemprop="description">\s*(.+?)</p>',
r'<div class="videoDecription">\s*<p><strong>Beschreibung</strong>: (.+?)</p>',
r'<div class="g-plusone" data-size="medium"></div>\s*</div>\s*</header>\s*(.+?)\s*<footer>',
r'<p class="att-description">\s*(.+?)\s*</p>',
r'<p class="video-description" itemprop="description">\s*(.+?)</p>',
r'<div[^>]+id="veeseoDescription"[^>]*>(.+?)</div>',
]
_UPLOAD_DATE_REGEXES = [
r'<meta property="og:published_time" content="(.+?)">',

View File

@@ -0,0 +1,44 @@
from __future__ import unicode_literals
from .nuevo import NuevoBaseIE
class RulePornIE(NuevoBaseIE):
_VALID_URL = r'https?://(?:www\.)?ruleporn\.com/(?:[^/?#&]+/)*(?P<id>[^/?#&]+)'
_TEST = {
'url': 'http://ruleporn.com/brunette-nympho-chick-takes-her-boyfriend-in-every-angle/',
'md5': '86861ebc624a1097c7c10eaf06d7d505',
'info_dict': {
'id': '48212',
'display_id': 'brunette-nympho-chick-takes-her-boyfriend-in-every-angle',
'ext': 'mp4',
'title': 'Brunette Nympho Chick Takes Her Boyfriend In Every Angle',
'description': 'md5:6d28be231b981fff1981deaaa03a04d5',
'age_limit': 18,
'duration': 635.1,
}
}
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
video_id = self._search_regex(
r'lovehomeporn\.com/embed/(\d+)', webpage, 'video id')
title = self._search_regex(
r'<h2[^>]+title=(["\'])(?P<url>.+?)\1',
webpage, 'title', group='url')
description = self._html_search_meta('description', webpage)
info = self._extract_nuevo(
'http://lovehomeporn.com/media/nuevo/econfig.php?key=%s&rp=true' % video_id,
video_id)
info.update({
'display_id': display_id,
'title': title,
'description': description,
'age_limit': 18
})
return info

View File

@@ -73,6 +73,9 @@ class ShahidIE(InfoExtractor):
'https://shahid.mbc.net/arContent/getPlayerContent-param-.id-%s.type-%s.html'
% (video_id, api_vars['type']), video_id, 'Downloading player JSON')
if player.get('drm'):
raise ExtractorError('This video is DRM protected.', expected=True)
formats = self._extract_m3u8_formats(player['url'], video_id, 'mp4')
video = self._download_json(

View File

@@ -37,6 +37,14 @@ class SVTBaseIE(InfoExtractor):
})
self._sort_formats(formats)
subtitles = {}
subtitle_references = video_info.get('subtitleReferences')
if isinstance(subtitle_references, list):
for sr in subtitle_references:
subtitle_url = sr.get('url')
if subtitle_url:
subtitles.setdefault('sv', []).append({'url': subtitle_url})
duration = video_info.get('materialLength')
age_limit = 18 if video_info.get('inappropriateForChildren') else 0
@@ -44,6 +52,7 @@ class SVTBaseIE(InfoExtractor):
'id': video_id,
'title': title,
'formats': formats,
'subtitles': subtitles,
'thumbnail': thumbnail,
'duration': duration,
'age_limit': age_limit,
@@ -83,30 +92,23 @@ class SVTIE(SVTBaseIE):
class SVTPlayIE(SVTBaseIE):
IE_DESC = 'SVT Play and Öppet arkiv'
_VALID_URL = r'https?://(?:www\.)?(?P<host>svtplay|oppetarkiv)\.se/video/(?P<id>[0-9]+)'
_TESTS = [{
'url': 'http://www.svtplay.se/video/2609989/sm-veckan/sm-veckan-rally-final-sasong-1-sm-veckan-rally-final',
'md5': 'ade3def0643fa1c40587a422f98edfd9',
_TEST = {
'url': 'http://www.svtplay.se/video/5996901/flygplan-till-haile-selassie/flygplan-till-haile-selassie-2',
'md5': '2b6704fe4a28801e1a098bbf3c5ac611',
'info_dict': {
'id': '2609989',
'ext': 'flv',
'title': 'SM veckan vinter, Örebro - Rally, final',
'duration': 4500,
'id': '5996901',
'ext': 'mp4',
'title': 'Flygplan till Haile Selassie',
'duration': 3527,
'thumbnail': 're:^https?://.*[\.-]jpg$',
'age_limit': 0,
'subtitles': {
'sv': [{
'ext': 'wsrt',
}]
},
},
}, {
'url': 'http://www.oppetarkiv.se/video/1058509/rederiet-sasong-1-avsnitt-1-av-318',
'md5': 'c3101a17ce9634f4c1f9800f0746c187',
'info_dict': {
'id': '1058509',
'ext': 'flv',
'title': 'Farlig kryssning',
'duration': 2566,
'thumbnail': 're:^https?://.*[\.-]jpg$',
'age_limit': 0,
},
'skip': 'Only works from Sweden',
}]
}
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)

View File

@@ -7,7 +7,7 @@ from ..utils import ExtractorError
class TestURLIE(InfoExtractor):
""" Allows adressing of the test cases as test:yout.*be_1 """
""" Allows addressing of the test cases as test:yout.*be_1 """
IE_DESC = False # Do not list
_VALID_URL = r'test(?:url)?:(?P<id>(?P<extractor>.+?)(?:_(?P<num>[0-9]+))?)$'

View File

@@ -85,7 +85,7 @@ class ThePlatformBaseIE(InfoExtractor):
class ThePlatformIE(ThePlatformBaseIE):
_VALID_URL = r'''(?x)
(?:https?://(?:link|player)\.theplatform\.com/[sp]/(?P<provider_id>[^/]+)/
(?:(?P<media>(?:[^/]+/)+select/media/)|(?P<config>(?:[^/\?]+/(?:swf|config)|onsite)/select/))?
(?:(?P<media>(?:(?:[^/]+/)+select/)?media/)|(?P<config>(?:[^/\?]+/(?:swf|config)|onsite)/select/))?
|theplatform:)(?P<id>[^/\?&]+)'''
_TESTS = [{

View File

@@ -0,0 +1,36 @@
# encoding: utf-8
from __future__ import unicode_literals
import re
from .nuevo import NuevoBaseIE
class TrollvidsIE(NuevoBaseIE):
_VALID_URL = r'http://(?:www\.)?trollvids\.com/video/(?P<id>\d+)/(?P<display_id>[^/?#&]+)'
IE_NAME = 'trollvids'
_TEST = {
'url': 'http://trollvids.com/video/2349002/%E3%80%90MMD-R-18%E3%80%91%E3%82%AC%E3%83%BC%E3%83%AB%E3%83%95%E3%83%AC%E3%83%B3%E3%83%89-carrymeoff',
'md5': '1d53866b2c514b23ed69e4352fdc9839',
'info_dict': {
'id': '2349002',
'ext': 'mp4',
'title': '【MMD R-18】ガールフレンド carry_me_off',
'age_limit': 18,
'duration': 216.78,
},
}
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
video_id = mobj.group('id')
display_id = mobj.group('display_id')
info = self._extract_nuevo(
'http://trollvids.com/nuevo/player/config.php?v=%s' % video_id,
video_id)
info.update({
'display_id': display_id,
'age_limit': 18
})
return info

View File

@@ -1,11 +1,10 @@
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import xpath_text
from .nuevo import NuevoBaseIE
class TruTubeIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?trutube\.tv/(?:video/|nuevo/player/embed\.php\?v=)(?P<id>[0-9]+)'
class TruTubeIE(NuevoBaseIE):
_VALID_URL = r'https?://(?:www\.)?trutube\.tv/(?:video/|nuevo/player/embed\.php\?v=)(?P<id>\d+)'
_TESTS = [{
'url': 'http://trutube.tv/video/14880/Ramses-II-Proven-To-Be-A-Red-Headed-Caucasoid-',
'md5': 'c5b6e301b0a2040b074746cbeaa26ca1',
@@ -22,19 +21,6 @@ class TruTubeIE(InfoExtractor):
def _real_extract(self, url):
video_id = self._match_id(url)
config = self._download_xml(
return self._extract_nuevo(
'https://trutube.tv/nuevo/player/config.php?v=%s' % video_id,
video_id, transform_source=lambda s: s.strip())
# filehd is always 404
video_url = xpath_text(config, './file', 'video URL', fatal=True)
title = xpath_text(config, './title', 'title').strip()
thumbnail = xpath_text(config, './image', ' thumbnail')
return {
'id': video_id,
'url': video_url,
'title': title,
'thumbnail': thumbnail,
}
video_id)

View File

@@ -1,10 +1,9 @@
from __future__ import unicode_literals
import json
import re
from .common import InfoExtractor
from ..compat import compat_urllib_parse_urlparse
from ..compat import compat_str
from ..utils import (
int_or_none,
sanitized_Request,
@@ -15,25 +14,23 @@ from ..aes import aes_decrypt_text
class Tube8IE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?tube8\.com/(?:[^/]+/)+(?P<display_id>[^/]+)/(?P<id>\d+)'
_TESTS = [
{
'url': 'http://www.tube8.com/teen/kasia-music-video/229795/',
'md5': '44bf12b98313827dd52d35b8706a4ea0',
'info_dict': {
'id': '229795',
'display_id': 'kasia-music-video',
'ext': 'mp4',
'description': 'hot teen Kasia grinding',
'uploader': 'unknown',
'title': 'Kasia music video',
'age_limit': 18,
}
},
{
'url': 'http://www.tube8.com/shemale/teen/blonde-cd-gets-kidnapped-by-two-blacks-and-punished-for-being-a-slutty-girl/19569151/',
'only_matching': True,
},
]
_TESTS = [{
'url': 'http://www.tube8.com/teen/kasia-music-video/229795/',
'md5': '65e20c48e6abff62ed0c3965fff13a39',
'info_dict': {
'id': '229795',
'display_id': 'kasia-music-video',
'ext': 'mp4',
'description': 'hot teen Kasia grinding',
'uploader': 'unknown',
'title': 'Kasia music video',
'age_limit': 18,
'duration': 230,
}
}, {
'url': 'http://www.tube8.com/shemale/teen/blonde-cd-gets-kidnapped-by-two-blacks-and-punished-for-being-a-slutty-girl/19569151/',
'only_matching': True,
}]
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
@@ -44,14 +41,28 @@ class Tube8IE(InfoExtractor):
req.add_header('Cookie', 'age_verified=1')
webpage = self._download_webpage(req, display_id)
flashvars = json.loads(self._html_search_regex(
r'flashvars\s*=\s*({.+?});\r?\n', webpage, 'flashvars'))
flashvars = self._parse_json(
self._search_regex(
r'flashvars\s*=\s*({.+?});\r?\n', webpage, 'flashvars'),
video_id)
video_url = flashvars['video_url']
if flashvars.get('encrypted') is True:
video_url = aes_decrypt_text(video_url, flashvars['video_title'], 32).decode('utf-8')
path = compat_urllib_parse_urlparse(video_url).path
format_id = '-'.join(path.split('/')[4].split('_')[:2])
formats = []
for key, video_url in flashvars.items():
if not isinstance(video_url, compat_str) or not video_url.startswith('http'):
continue
height = self._search_regex(
r'quality_(\d+)[pP]', key, 'height', default=None)
if not height:
continue
if flashvars.get('encrypted') is True:
video_url = aes_decrypt_text(
video_url, flashvars['video_title'], 32).decode('utf-8')
formats.append({
'url': video_url,
'format_id': '%sp' % height,
'height': int(height),
})
self._sort_formats(formats)
thumbnail = flashvars.get('image_url')
@@ -62,32 +73,31 @@ class Tube8IE(InfoExtractor):
uploader = self._html_search_regex(
r'<span class="username">\s*(.+?)\s*<',
webpage, 'uploader', fatal=False)
duration = int_or_none(flashvars.get('video_duration'))
like_count = int_or_none(self._html_search_regex(
like_count = int_or_none(self._search_regex(
r'rupVar\s*=\s*"(\d+)"', webpage, 'like count', fatal=False))
dislike_count = int_or_none(self._html_search_regex(
dislike_count = int_or_none(self._search_regex(
r'rdownVar\s*=\s*"(\d+)"', webpage, 'dislike count', fatal=False))
view_count = self._html_search_regex(
r'<strong>Views: </strong>([\d,\.]+)\s*</li>', webpage, 'view count', fatal=False)
if view_count:
view_count = str_to_int(view_count)
comment_count = self._html_search_regex(
r'<span id="allCommentsCount">(\d+)</span>', webpage, 'comment count', fatal=False)
if comment_count:
comment_count = str_to_int(comment_count)
view_count = str_to_int(self._search_regex(
r'<strong>Views: </strong>([\d,\.]+)\s*</li>',
webpage, 'view count', fatal=False))
comment_count = str_to_int(self._search_regex(
r'<span id="allCommentsCount">(\d+)</span>',
webpage, 'comment count', fatal=False))
return {
'id': video_id,
'display_id': display_id,
'url': video_url,
'title': title,
'description': description,
'thumbnail': thumbnail,
'uploader': uploader,
'format_id': format_id,
'duration': duration,
'view_count': view_count,
'like_count': like_count,
'dislike_count': dislike_count,
'comment_count': comment_count,
'age_limit': 18,
'formats': formats,
}
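A toy illustration of the new per-quality loop above: flashvars entries whose keys carry a quality_<height>p marker become one format each. The dict below is invented; only the key pattern and the resulting format fields mirror the extractor.

# Hypothetical flashvars blob; only the quality_<height>p key convention is
# taken from the code above, the URLs and heights are made up.
import re

flashvars = {
    'video_title': 'Example',
    'quality_480p': 'http://cdn.example.com/480.mp4',
    'quality_720p': 'http://cdn.example.com/720.mp4',
    'encrypted': False,
}

formats = []
for key, video_url in flashvars.items():
    if not isinstance(video_url, str) or not video_url.startswith('http'):
        continue
    m = re.search(r'quality_(\d+)[pP]', key)
    if not m:
        continue
    height = int(m.group(1))
    formats.append({'url': video_url, 'format_id': '%dp' % height, 'height': height})

formats.sort(key=lambda f: f['height'])
print(formats)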

View File

@@ -4,10 +4,16 @@ from __future__ import unicode_literals
from .common import InfoExtractor
from ..compat import compat_str
from ..utils import (
int_or_none,
float_or_none,
unescapeHTML,
)
class TudouIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?tudou\.com/(?:listplay|programs(?:/view)?|albumplay)/([^/]+/)*(?P<id>[^/?#]+?)(?:\.html)?/?(?:$|[?#])'
IE_NAME = 'tudou'
_VALID_URL = r'https?://(?:www\.)?tudou\.com/(?:(?:programs|wlplay)/view|(?:listplay|albumplay)/[\w-]{11})/(?P<id>[\w-]{11})'
_TESTS = [{
'url': 'http://www.tudou.com/listplay/zzdE77v6Mmo/2xN2duXMxmw.html',
'md5': '140a49ed444bd22f93330985d8475fcb',
@@ -16,6 +22,11 @@ class TudouIE(InfoExtractor):
'ext': 'f4v',
'title': '卡马乔国足开大脚长传冲吊集锦',
'thumbnail': 're:^https?://.*\.jpg$',
'timestamp': 1372113489000,
'description': '卡马乔卡家军,开大脚先进战术不完全集锦!',
'duration': 289.04,
'view_count': int,
'filesize': int,
}
}, {
'url': 'http://www.tudou.com/programs/view/ajX3gyhL0pc/',
@@ -24,10 +35,12 @@ class TudouIE(InfoExtractor):
'ext': 'f4v',
'title': 'La Sylphide-Bolshoi-Ekaterina Krysanova & Vyacheslav Lopatin 2012',
'thumbnail': 're:^https?://.*\.jpg$',
'timestamp': 1349207518000,
'description': 'md5:294612423894260f2dcd5c6c04fe248b',
'duration': 5478.33,
'view_count': int,
'filesize': int,
}
}, {
'url': 'http://www.tudou.com/albumplay/cJAHGih4yYg.html',
'only_matching': True,
}]
_PLAYER_URL = 'http://js.tudouui.com/bin/lingtong/PortalPlayer_177.swf'
@@ -42,24 +55,20 @@ class TudouIE(InfoExtractor):
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
item_data = self._download_json(
'http://www.tudou.com/tvp/getItemInfo.action?ic=%s' % video_id, video_id)
youku_vcode = self._search_regex(
r'vcode\s*:\s*[\'"]([^\'"]*)[\'"]', webpage, 'youku vcode', default=None)
youku_vcode = item_data.get('vcode')
if youku_vcode:
return self.url_result('youku:' + youku_vcode, ie='Youku')
title = self._search_regex(
r',kw\s*:\s*[\'"]([^\'"]+)[\'"]', webpage, 'title')
thumbnail_url = self._search_regex(
r',pic\s*:\s*[\'"]([^\'"]+)[\'"]', webpage, 'thumbnail URL', fatal=False)
title = unescapeHTML(item_data['kw'])
description = item_data.get('desc')
thumbnail_url = item_data.get('pic')
view_count = int_or_none(item_data.get('playTimes'))
timestamp = int_or_none(item_data.get('pt'))
player_url = self._search_regex(
r'playerUrl\s*:\s*[\'"]([^\'"]+\.swf)[\'"]',
webpage, 'player URL', default=self._PLAYER_URL)
segments = self._parse_json(self._search_regex(
r'segs: \'([^\']+)\'', webpage, 'segments'), video_id)
segments = self._parse_json(item_data['itemSegs'], video_id)
# It looks like the keys are the arguments that have to be passed as
# the hd field in the request url, we pick the higher
# Also, filter non-number qualities (see issue #3643).
@@ -80,8 +89,13 @@ class TudouIE(InfoExtractor):
'ext': ext,
'title': title,
'thumbnail': thumbnail_url,
'description': description,
'view_count': view_count,
'timestamp': timestamp,
'duration': float_or_none(part.get('seconds'), 1000),
'filesize': int_or_none(part.get('size')),
'http_headers': {
'Referer': player_url,
'Referer': self._PLAYER_URL,
},
}
result.append(part_info)
@@ -92,3 +106,47 @@ class TudouIE(InfoExtractor):
'id': video_id,
'title': title,
}
class TudouPlaylistIE(InfoExtractor):
IE_NAME = 'tudou:playlist'
_VALID_URL = r'https?://(?:www\.)?tudou\.com/listplay/(?P<id>[\w-]{11})\.html'
_TESTS = [{
'url': 'http://www.tudou.com/listplay/zzdE77v6Mmo.html',
'info_dict': {
'id': 'zzdE77v6Mmo',
},
'playlist_mincount': 209,
}]
def _real_extract(self, url):
playlist_id = self._match_id(url)
playlist_data = self._download_json(
'http://www.tudou.com/tvp/plist.action?lcode=%s' % playlist_id, playlist_id)
entries = [self.url_result(
'http://www.tudou.com/programs/view/%s' % item['icode'],
'Tudou', item['icode'],
item['kw']) for item in playlist_data['items']]
return self.playlist_result(entries, playlist_id)
class TudouAlbumIE(InfoExtractor):
IE_NAME = 'tudou:album'
_VALID_URL = r'https?://(?:www\.)?tudou\.com/album(?:cover|play)/(?P<id>[\w-]{11})'
_TESTS = [{
'url': 'http://www.tudou.com/albumplay/v5qckFJvNJg.html',
'info_dict': {
'id': 'v5qckFJvNJg',
},
'playlist_mincount': 45,
}]
def _real_extract(self, url):
album_id = self._match_id(url)
album_data = self._download_json(
'http://www.tudou.com/tvp/alist.action?acode=%s' % album_id, album_id)
entries = [self.url_result(
'http://www.tudou.com/programs/view/%s' % item['icode'],
'Tudou', item['icode'],
item['kw']) for item in album_data['items']]
return self.playlist_result(entries, album_id)

View File

@@ -67,7 +67,7 @@ class TV4IE(InfoExtractor):
info = self._download_json(
'http://www.tv4play.se/player/assets/%s.json' % video_id, video_id, 'Downloading video info JSON')
# If is_geo_restricted is true, it doesn't neceserally mean we can't download it
# If is_geo_restricted is true, it doesn't necessarily mean we can't download it
if info['is_geo_restricted']:
self.report_warning('This content might not be available in your country due to licensing restrictions.')
if info['requires_subscription']:

View File

@@ -38,7 +38,7 @@ class UnistraIE(InfoExtractor):
webpage = self._download_webpage(url, video_id)
files = set(re.findall(r'file\s*:\s*"([^"]+)"', webpage))
files = set(re.findall(r'file\s*:\s*"(/[^"]+)"', webpage))
quality = qualities(['SD', 'HD'])
formats = []

View File

@@ -47,7 +47,7 @@ class UstreamIE(InfoExtractor):
m = re.match(self._VALID_URL, url)
video_id = m.group('id')
# some sites use this embed format (see: http://github.com/rg3/youtube-dl/issues/2990)
# some sites use this embed format (see: https://github.com/rg3/youtube-dl/issues/2990)
if m.group('type') == 'embed/recorded':
video_id = m.group('id')
desktop_url = 'http://www.ustream.tv/recorded/' + video_id

View File

@@ -8,6 +8,7 @@ from ..utils import sanitized_Request
class VideoMegaIE(InfoExtractor):
_WORKING = False
_VALID_URL = r'(?:videomega:|https?://(?:www\.)?videomega\.tv/(?:(?:view|iframe|cdn)\.php)?\?ref=)(?P<id>[A-Za-z0-9]+)'
_TESTS = [{
'url': 'http://videomega.tv/cdn.php?ref=AOSQBJYKIDDIKYJBQSOA',

View File

@@ -170,7 +170,7 @@ class VideomoreVideoIE(InfoExtractor):
'skip_download': True,
},
}, {
# season single serie with og:video:iframe
# season single series with og:video:iframe
'url': 'http://videomore.ru/poslednii_ment/1_sezon/14_seriya',
'only_matching': True,
}, {

View File

@@ -11,6 +11,7 @@ from ..utils import (
class VideoTtIE(InfoExtractor):
_WORKING = False
ID_NAME = 'video.tt'
IE_DESC = 'video.tt - Your True Tube'
_VALID_URL = r'http://(?:www\.)?video\.tt/(?:(?:video|embed)/|watch_video\.php\?v=)(?P<id>[\da-zA-Z]{9})'

View File

@@ -155,10 +155,10 @@ class ViewsterIE(InfoExtractor):
self._sort_formats(formats)
synopsis = info.get('Synopsis', {})
synopsis = info.get('Synopsis') or {}
# Prefer title outside synopsis since it's less messy
title = (info.get('Title') or synopsis['Title']).strip()
description = synopsis.get('Detailed') or info.get('Synopsis', {}).get('Short')
description = synopsis.get('Detailed') or (info.get('Synopsis') or {}).get('Short')
duration = int_or_none(info.get('Duration'))
timestamp = parse_iso8601(info.get('ReleaseDate'))

View File

@@ -430,7 +430,7 @@ class VimeoIE(VimeoBaseInfoExtractor):
if download_url and not source_file.get('is_cold') and not source_file.get('is_defrosting'):
source_name = source_file.get('public_name', 'Original')
if self._is_valid_url(download_url, video_id, '%s video' % source_name):
ext = source_file.get('extension', determine_ext(download_url)).lower(),
ext = source_file.get('extension', determine_ext(download_url)).lower()
formats.append({
'url': download_url,
'ext': ext,

View File

@@ -5,12 +5,13 @@ from .common import InfoExtractor
from ..compat import compat_urllib_parse
from ..utils import (
ExtractorError,
NO_DEFAULT,
sanitized_Request,
)
class VodlockerIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?vodlocker\.com/(?:embed-)?(?P<id>[0-9a-zA-Z]+)(?:\..*?)?'
_VALID_URL = r'https?://(?:www\.)?vodlocker\.(?:com|city)/(?:embed-)?(?P<id>[0-9a-zA-Z]+)(?:\..*?)?'
_TESTS = [{
'url': 'http://vodlocker.com/e8wvyzz4sl42',
@@ -43,16 +44,31 @@ class VodlockerIE(InfoExtractor):
webpage = self._download_webpage(
req, video_id, 'Downloading video page')
def extract_file_url(html, default=NO_DEFAULT):
return self._search_regex(
r'file:\s*"(http[^\"]+)",', html, 'file url', default=default)
video_url = extract_file_url(webpage, default=None)
if not video_url:
embed_url = self._search_regex(
r'<iframe[^>]+src=(["\'])(?P<url>(?:https?://)?vodlocker\.(?:com|city)/embed-.+?)\1',
webpage, 'embed url', group='url')
embed_webpage = self._download_webpage(
embed_url, video_id, 'Downloading embed webpage')
video_url = extract_file_url(embed_webpage)
thumbnail_webpage = embed_webpage
else:
thumbnail_webpage = webpage
title = self._search_regex(
r'id="file_title".*?>\s*(.*?)\s*<(?:br|span)', webpage, 'title')
thumbnail = self._search_regex(
r'image:\s*"(http[^\"]+)",', webpage, 'thumbnail')
url = self._search_regex(
r'file:\s*"(http[^\"]+)",', webpage, 'file url')
r'image:\s*"(http[^\"]+)",', thumbnail_webpage, 'thumbnail', fatal=False)
formats = [{
'format_id': 'sd',
'url': url,
'url': video_url,
}]
return {

View File

@@ -0,0 +1,52 @@
# coding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
class WeiqiTVIE(InfoExtractor):
IE_DESC = 'WQTV'
_VALID_URL = r'http://www\.weiqitv\.com/index/video_play\?videoId=(?P<id>[A-Za-z0-9]+)'
_TESTS = [{
'url': 'http://www.weiqitv.com/index/video_play?videoId=53c744f09874f0e76a8b46f3',
'md5': '26450599afd64c513bc77030ad15db44',
'info_dict': {
'id': '53c744f09874f0e76a8b46f3',
'ext': 'mp4',
'title': '2013年度盘点',
},
}, {
'url': 'http://www.weiqitv.com/index/video_play?videoId=567379a2d4c36cca518b4569',
'info_dict': {
'id': '567379a2d4c36cca518b4569',
'ext': 'mp4',
'title': '民国围棋史',
},
}, {
'url': 'http://www.weiqitv.com/index/video_play?videoId=5430220a9874f088658b4567',
'info_dict': {
'id': '5430220a9874f088658b4567',
'ext': 'mp4',
'title': '二路托过的手段和运用',
},
}]
def _real_extract(self, url):
media_id = self._match_id(url)
page = self._download_webpage(url, media_id)
info_json_str = self._search_regex(
'var\s+video\s*=\s*(.+});', page, 'info json str')
info_json = self._parse_json(info_json_str, media_id)
letvcloud_url = self._search_regex(
'var\s+letvurl\s*=\s*"([^"]+)', page, 'letvcloud url')
return {
'_type': 'url_transparent',
'ie_key': 'LetvCloud',
'url': letvcloud_url,
'title': info_json['name'],
'id': media_id,
}

View File

@@ -6,7 +6,6 @@ from .common import InfoExtractor
from ..utils import (
float_or_none,
int_or_none,
str_to_int,
unified_strdate,
)

View File

@@ -1,10 +1,12 @@
from __future__ import unicode_literals
import itertools
import re
from .common import InfoExtractor
from ..compat import compat_urllib_parse_unquote
from ..utils import (
int_or_none,
parse_duration,
sanitized_Request,
str_to_int,
@@ -12,7 +14,7 @@ from ..utils import (
class XTubeIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?(?P<url>xtube\.com/watch\.php\?v=(?P<id>[^/?&#]+))'
_VALID_URL = r'(?:xtube:|https?://(?:www\.)?xtube\.com/watch\.php\?.*\bv=)(?P<id>[^/?&#]+)'
_TEST = {
'url': 'http://www.xtube.com/watch.php?v=kVTUy_G222_',
'md5': '092fbdd3cbe292c920ef6fc6a8a9cdab',
@@ -30,7 +32,7 @@ class XTubeIE(InfoExtractor):
def _real_extract(self, url):
video_id = self._match_id(url)
req = sanitized_Request(url)
req = sanitized_Request('http://www.xtube.com/watch.php?v=%s' % video_id)
req.add_header('Cookie', 'age_verified=1')
webpage = self._download_webpage(req, video_id)
@@ -88,45 +90,43 @@ class XTubeIE(InfoExtractor):
class XTubeUserIE(InfoExtractor):
IE_DESC = 'XTube user profile'
_VALID_URL = r'https?://(?:www\.)?xtube\.com/community/profile\.php\?(.*?)user=(?P<username>[^&#]+)(?:$|[&#])'
_VALID_URL = r'https?://(?:www\.)?xtube\.com/profile/(?P<id>[^/]+-\d+)'
_TEST = {
'url': 'http://www.xtube.com/community/profile.php?user=greenshowers',
'url': 'http://www.xtube.com/profile/greenshowers-4056496',
'info_dict': {
'id': 'greenshowers',
'id': 'greenshowers-4056496',
'age_limit': 18,
},
'playlist_mincount': 155,
}
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
username = mobj.group('username')
user_id = self._match_id(url)
profile_page = self._download_webpage(
url, username, note='Retrieving profile page')
entries = []
for pagenum in itertools.count(1):
request = sanitized_Request(
'http://www.xtube.com/profile/%s/videos/%d' % (user_id, pagenum),
headers={
'Cookie': 'popunder=4',
'X-Requested-With': 'XMLHttpRequest',
'Referer': url,
})
video_count = int(self._search_regex(
r'<strong>%s\'s Videos \(([0-9]+)\)</strong>' % username, profile_page,
'video count'))
page = self._download_json(
request, user_id, 'Downloading videos JSON page %d' % pagenum)
PAGE_SIZE = 25
urls = []
page_count = (video_count + PAGE_SIZE + 1) // PAGE_SIZE
for n in range(1, page_count + 1):
lpage_url = 'http://www.xtube.com/user_videos.php?page=%d&u=%s' % (n, username)
lpage = self._download_webpage(
lpage_url, username,
note='Downloading page %d/%d' % (n, page_count))
urls.extend(
re.findall(r'addthis:url="([^"]+)"', lpage))
html = page.get('html')
if not html:
break
return {
'_type': 'playlist',
'id': username,
'age_limit': 18,
'entries': [{
'_type': 'url',
'url': eurl,
'ie_key': 'XTube',
} for eurl in urls]
}
for _, video_id in re.findall(r'data-plid=(["\'])(.+?)\1', html):
entries.append(self.url_result('xtube:%s' % video_id, XTubeIE.ie_key()))
page_count = int_or_none(page.get('pageCount'))
if not page_count or pagenum == page_count:
break
playlist = self.playlist_result(entries, user_id)
playlist['age_limit'] = 18
return playlist

View File

@@ -221,6 +221,8 @@ class YahooIE(InfoExtractor):
r'root\.App\.Cache\.context\.videoCache\.curVideo = \{"([^"]+)"',
r'"first_videoid"\s*:\s*"([^"]+)"',
r'%s[^}]*"ccm_id"\s*:\s*"([^"]+)"' % re.escape(page_id),
r'<article[^>]data-uuid=["\']([^"\']+)',
r'yahoo://article/view\?.*\buuid=([^&"\']+)',
]
video_id = self._search_regex(
CONTENT_ID_REGEXES, webpage, 'content ID')

View File

@@ -613,7 +613,8 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
},
'params': {
'skip_download': 'requires avconv',
}
},
'skip': 'This live event has ended.',
},
# Extraction from multiple DASH manifests (https://github.com/rg3/youtube-dl/pull/6097)
{
@@ -706,6 +707,9 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
},
{
# Title with JS-like syntax "};" (see https://github.com/rg3/youtube-dl/issues/7468)
# Also tests cut-off URL expansion in video description (see
# https://github.com/rg3/youtube-dl/issues/1892,
# https://github.com/rg3/youtube-dl/issues/8164)
'url': 'https://www.youtube.com/watch?v=lsguqyKfVQg',
'info_dict': {
'id': 'lsguqyKfVQg',
@@ -960,6 +964,9 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
try:
args = player_config['args']
caption_url = args['ttsurl']
if not caption_url:
self._downloader.report_warning(err_msg)
return {}
timestamp = args['timestamp']
# We get the available subtitles
list_params = compat_urllib_parse.urlencode({
@@ -1237,7 +1244,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
(?:[a-zA-Z-]+="[^"]+"\s+)*?
(?:title|href)="([^"]+)"\s+
(?:[a-zA-Z-]+="[^"]+"\s+)*?
class="(?:yt-uix-redirect-link|yt-uix-sessionlink[^"]*)".*?>
class="(?:yt-uix-redirect-link|yt-uix-sessionlink[^"]*)"[^>]*>
[^<]+\.{3}\s*
</a>
''', r'\1', video_description)
@@ -1487,7 +1494,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
if codecs:
codecs = codecs.split(',')
if len(codecs) == 2:
- acodec, vcodec = codecs[0], codecs[1]
+ acodec, vcodec = codecs[1], codecs[0]
else:
acodec, vcodec = (codecs[0], 'none') if kind == 'audio' else ('none', codecs[0])
dct.update({
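The one-line swap above matters because in a typical MP4 'codecs' value the video codec is listed first and the audio codec second; a quick illustration with a made-up but representative string:

codecs = 'avc1.4d401e, mp4a.40.2'.split(',')
if len(codecs) == 2:
    # corrected order: codecs[0] is the video codec, codecs[1] the audio codec
    acodec, vcodec = codecs[1], codecs[0]
    print(vcodec.strip(), acodec.strip())  # avc1.4d401e mp4a.40.2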
@@ -1505,6 +1512,11 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
for a_format in formats:
a_format.setdefault('http_headers', {})['Youtubedl-no-compression'] = 'True'
else:
+ unavailable_message = self._html_search_regex(
+ r'(?s)<h1[^>]+id="unavailable-message"[^>]*>(.+?)</h1>',
+ video_webpage, 'unavailable message', default=None)
+ if unavailable_message:
+ raise ExtractorError(unavailable_message, expected=True)
raise ExtractorError('no conn, hlsvp or url_encoded_fmt_stream_map information found in video info')
# Look for the DASH manifest
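To illustrate the new fallback above: when no stream data is found but the watch page carries an explicit error, that text is raised instead of the generic message. The HTML fragment below is a made-up example of such a page.

import re

video_webpage = '<h1 id="unavailable-message" class="message">This video is not available in your country.</h1>'
unavailable_message = re.search(
    r'(?s)<h1[^>]+id="unavailable-message"[^>]*>(.+?)</h1>', video_webpage)
if unavailable_message:
    print(unavailable_message.group(1).strip())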

View File

@@ -0,0 +1,94 @@
from __future__ import unicode_literals
import re
from .common import InfoExtractor
from ..utils import (
determine_ext,
str_to_int,
)
class ZippCastIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?zippcast\.com/(?:video/|videoview\.php\?.*\bvplay=)(?P<id>[0-9a-zA-Z]+)'
_TESTS = [{
# m3u8, hq direct link
'url': 'http://www.zippcast.com/video/c9cfd5c7e44dbc29c81',
'md5': '5ea0263b5606866c4d6cda0fc5e8c6b6',
'info_dict': {
'id': 'c9cfd5c7e44dbc29c81',
'ext': 'mp4',
'title': '[Vinesauce] Vinny - Digital Space Traveler',
'description': 'Muted on youtube, but now uploaded in it\'s original form.',
'thumbnail': 're:^https?://.*\.jpg$',
'uploader': 'vinesauce',
'view_count': int,
'categories': ['Entertainment'],
'tags': list,
},
}, {
# f4m, lq ipod direct link
'url': 'http://www.zippcast.com/video/b79c0a233e9c6581775',
'only_matching': True,
}, {
'url': 'http://www.zippcast.com/videoview.php?vplay=c9cfd5c7e44dbc29c81&auto=no',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(
'http://www.zippcast.com/video/%s' % video_id, video_id)
formats = []
video_url = self._search_regex(
r'<source[^>]+src=(["\'])(?P<url>.+?)\1', webpage,
'video url', default=None, group='url')
if video_url:
formats.append({
'url': video_url,
'format_id': 'http',
'preference': 0, # direct link is almost always of worse quality
})
src_url = self._search_regex(
r'src\s*:\s*(?:escape\()?(["\'])(?P<url>http://.+?)\1',
webpage, 'src', default=None, group='url')
ext = determine_ext(src_url)
if ext == 'm3u8':
formats.extend(self._extract_m3u8_formats(
src_url, video_id, 'mp4', entry_protocol='m3u8_native',
m3u8_id='hls', fatal=False))
elif ext == 'f4m':
formats.extend(self._extract_f4m_formats(
src_url, video_id, f4m_id='hds', fatal=False))
self._sort_formats(formats)
title = self._og_search_title(webpage)
description = self._og_search_description(webpage) or self._html_search_meta(
'description', webpage)
uploader = self._search_regex(
r'<a[^>]+href="https?://[^/]+/profile/[^>]+>([^<]+)</a>',
webpage, 'uploader', fatal=False)
thumbnail = self._og_search_thumbnail(webpage)
view_count = str_to_int(self._search_regex(
r'>([\d,.]+) views!', webpage, 'view count', fatal=False))
categories = re.findall(
r'<a[^>]+href="https?://[^/]+/categories/[^"]+">([^<]+),?<',
webpage)
tags = re.findall(
r'<a[^>]+href="https?://[^/]+/search/tags/[^"]+">([^<]+),?<',
webpage)
return {
'id': video_id,
'title': title,
'description': description,
'thumbnail': thumbnail,
'uploader': uploader,
'view_count': view_count,
'categories': categories,
'tags': tags,
'formats': formats,
}
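A minimal sketch of exercising the new extractor through the public YoutubeDL API; the URL is the first test URL above, and which formats come back depends on what the page actually serves.

from __future__ import unicode_literals
import youtube_dl

with youtube_dl.YoutubeDL({'quiet': True}) as ydl:
    info = ydl.extract_info(
        'http://www.zippcast.com/video/c9cfd5c7e44dbc29c81', download=False)
    print(info['id'], info['title'])
    for f in info.get('formats', []):
        print(f.get('format_id'), f.get('ext'), f.get('url'))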

View File

@@ -380,7 +380,7 @@ def parseOpts(overrideArguments=None):
'--sub-lang', '--sub-langs', '--srt-lang',
action='callback', dest='subtitleslangs', metavar='LANGS', type='str',
default=[], callback=_comma_separated_values_options_callback,
- help='Languages of the subtitles to download (optional) separated by commas, use IETF language tags like \'en,pt\'')
+ help='Languages of the subtitles to download (optional) separated by commas, use --list-subs for available language tags')
downloader = optparse.OptionGroup(parser, 'Download Options')
downloader.add_option(

View File

@@ -689,7 +689,7 @@ class SWFInterpreter(object):
elif mname in _builtin_classes:
res = _builtin_classes[mname]
else:
- # Assume unitialized
+ # Assume uninitialized
# TODO warn here
res = undefined
stack.append(res)

View File

@@ -15,33 +15,17 @@ from .version import __version__
def rsa_verify(message, signature, key):
- from struct import pack
from hashlib import sha256
assert isinstance(message, bytes)
- block_size = 0
- n = key[0]
- while n:
- block_size += 1
- n >>= 8
- signature = pow(int(signature, 16), key[1], key[0])
- raw_bytes = []
- while signature:
- raw_bytes.insert(0, pack("B", signature & 0xFF))
- signature >>= 8
- signature = (block_size - len(raw_bytes)) * b'\x00' + b''.join(raw_bytes)
- if signature[0:2] != b'\x00\x01':
+ byte_size = (len(bin(key[0])) - 2 + 8 - 1) // 8
+ signature = ('%x' % pow(int(signature, 16), key[1], key[0])).encode()
+ signature = (byte_size * 2 - len(signature)) * b'0' + signature
+ asn1 = b'3031300d060960864801650304020105000420'
+ asn1 += sha256(message).hexdigest().encode()
+ if byte_size < len(asn1) // 2 + 11:
return False
- signature = signature[2:]
- if b'\x00' not in signature:
- return False
- signature = signature[signature.index(b'\x00') + 1:]
- if not signature.startswith(b'\x30\x31\x30\x0D\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x01\x05\x00\x04\x20'):
- return False
- signature = signature[19:]
- if signature != sha256(message).digest():
- return False
- return True
+ expected = b'0001' + (byte_size - len(asn1) // 2 - 3) * b'ff' + b'00' + asn1
+ return expected == signature
def update_self(to_screen, verbose, opener):
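For reference, the value the rewritten rsa_verify compares against is the standard EMSA-PKCS1-v1_5 encoding of a SHA-256 DigestInfo, spelled out as lowercase hex. A minimal sketch of just that encoding step (the helper name is illustrative):

from hashlib import sha256

def emsa_pkcs1_v1_5_hex(message, byte_size):
    # 00 01 | ff-padding | 00 | ASN.1 DigestInfo(SHA-256) | SHA-256(message),
    # written as hex characters so it can be compared byte-for-byte with the
    # zero-padded hex rendering of pow(signature, e, n).
    asn1 = b'3031300d060960864801650304020105000420' + sha256(message).hexdigest().encode()
    return b'0001' + (byte_size - len(asn1) // 2 - 3) * b'ff' + b'00' + asn1

Verification then reduces to a straight equality check against this string, so none of the attacker-controlled padding is ever parsed.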

View File

@@ -984,7 +984,7 @@ def date_from_str(date_str):
if sign == '-':
time = -time
unit = match.group('unit')
- # A bad aproximation?
+ # A bad approximation?
if unit == 'month':
unit = 'day'
time *= 30
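The approximation the comment refers to: relative month offsets are rewritten as fixed 30-day offsets before the date arithmetic, so for example:

from youtube_dl.utils import date_from_str

# 'now-1month' is evaluated as 'now minus 30 days', regardless of how long
# the previous calendar month actually was.
print(date_from_str('now-1month'))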
@@ -1307,7 +1307,7 @@ def parse_filesize(s):
if s is None:
return None
- # The lower-case forms are of course incorrect and inofficial,
+ # The lower-case forms are of course incorrect and unofficial,
# but we support those too
_UNIT_TABLE = {
'B': 1,

View File

@@ -1,3 +1,3 @@
from __future__ import unicode_literals
- __version__ = '2016.01.09'
+ __version__ = '2016.01.23'