Compare commits

...

238 Commits

Author SHA1 Message Date
Philipp Hagemeister
84437adfa3 release 2014.11.20.1 2014-11-20 12:20:57 +01:00
Philipp Hagemeister
732ea2f09b [utils] Improve update on error message somewhat
We still may want to implement a bulletproof check for the current version, and find a better place to add this message so that it works for all kinds of other errors too.
2014-11-20 12:14:30 +01:00
Philipp Hagemeister
aff2f4f4f5 [arte] Clean up format sorting mess
We now use our standard sorting facilities. As a side effect, it's finally possible to download German videos from French URLs and vice versa.
2014-11-20 12:06:35 +01:00
Philipp Hagemeister
3b9f631c41 release 2014.11.20 2014-11-20 08:55:56 +01:00
Jaime Marquínez Ferrándiz
3ba098a6a5 Merge pull request #4247 from ivan/info-json
Fix #4246 and #4244 .info.json bugs
2014-11-20 08:16:42 +01:00
Ivan Kozik
1394646a0a Fix "ERROR: Cannot write metadata to JSON file" on Windows
Fixes #4246
2014-11-20 06:26:34 +00:00
Ivan Kozik
61ee5aeb73 Fix UnicodeEncodeError with --write-info-json on Python 2.7 + Windows
Fixes #4244
2014-11-20 06:26:34 +00:00
Philipp Hagemeister
07e378fa18 [compat] correct OptionGroup invocation for Python 3 (fixes #4243) 2014-11-20 07:21:12 +01:00
Philipp Hagemeister
e07e931375 Work around 2.7.0 deficiencies (Fixes #4223) 2014-11-19 18:21:58 +01:00
Naglis Jonaitis
480b7c32a9 [rtlxl] Fix format order 2014-11-19 01:21:02 +02:00
Jaime Marquínez Ferrándiz
f56875f271 [test/test_compat] Restore the old value of the HOME environment variable
If the test was run before the YoutubeIE tests (for example by running
"nosetests -v test/test_compat.py test/test_download.py -m 'Youtube_1|compat_expand'"),
it wrote the signatures cache to the 'C:\Documents and Settings\тест\Application Data' folder.
It failed due to a problem in the cache code and the write_json_file function (fixed in f03e33b89a622af13fa5275c46b63aaa4814c499)
2014-11-19 00:02:24 +01:00
Jaime Marquínez Ferrándiz
92120217eb [cache] Fix writing to paths with unicode characters
* Use "compat_getenv"
* "write_json_file" now expects the filename to be a string
2014-11-19 00:02:24 +01:00
Naglis Jonaitis
37eddd3143 [rtlxl] Use m3u8 streams instead of f4m (#4115, #4118) 2014-11-19 00:26:44 +02:00
Philipp Hagemeister
0857baade3 [youtube] Add webm audio formats (Fixes #4229) 2014-11-18 11:08:37 +01:00
Philipp Hagemeister
23ad44b57b [youtube] Better error message for DASH manifest 2014-11-17 20:12:52 +01:00
Philipp Hagemeister
f48d3e9bbc [swfinterp] Improve undefined representation 2014-11-17 08:02:48 +01:00
Philipp Hagemeister
fbf94a7815 [swfinterp] Add more builtins and improve static variables 2014-11-17 07:54:06 +01:00
Philipp Hagemeister
1921b24551 [swfinterp] Add support for more complicated constants 2014-11-17 07:31:22 +01:00
Philipp Hagemeister
28e614de5c [utils] Remove stray u' 2014-11-17 07:16:12 +01:00
Philipp Hagemeister
cd9ad1d7e8 [swfinterp] Basic support for constants (only ints for now) 2014-11-17 07:14:02 +01:00
Philipp Hagemeister
162f54eca6 [swfinterp] Implement bitand and pushshort operators 2014-11-17 05:08:39 +01:00
Philipp Hagemeister
33a266f4ba [swfinterp] Implement charCodeAt 2014-11-17 05:03:46 +01:00
Philipp Hagemeister
6b592d93a2 [swfinterp] Formalize built-in classes 2014-11-17 04:54:54 +01:00
Philipp Hagemeister
4686ae4b64 [swfinterp] Implement various opcodes 2014-11-17 04:45:12 +01:00
Philipp Hagemeister
8d05f2c16a [swfinterp] Add support for void methods 2014-11-17 04:36:23 +01:00
Philipp Hagemeister
a4bb83956c [swfinterp] Implement pushtrue and pushfalse opcodes 2014-11-17 04:29:34 +01:00
Philipp Hagemeister
eb5376044c [swfinterp] Implement equals opcode 2014-11-17 04:27:51 +01:00
Philipp Hagemeister
3cbcff8a2d [swfinterp] Implement String basics 2014-11-17 04:25:10 +01:00
Philipp Hagemeister
e983cf5277 [swfinterp] Interpret yet more opcodes 2014-11-17 04:00:41 +01:00
Philipp Hagemeister
0ab1ca5501 [swfinterp] Better error message 2014-11-17 03:53:32 +01:00
Philipp Hagemeister
4baafa229d [swfinterp] Interpret more multinames 2014-11-17 03:46:23 +01:00
Philipp Hagemeister
7f3e33a147 [swfinterp] Implement member assignment 2014-11-17 01:27:34 +01:00
Philipp Hagemeister
b7558d9881 [swfinterp] Allow function patching 2014-11-17 01:27:15 +01:00
Philipp Hagemeister
a0f59cdcb4 [tests] Modernize 2014-11-16 15:17:48 +01:00
Philipp Hagemeister
a4bc433619 [__init__] Modernize 2014-11-16 15:08:34 +01:00
Philipp Hagemeister
b6b70730bf [downloader/common] Modernize 2014-11-16 15:06:59 +01:00
Philipp Hagemeister
6a68bb574a [eporner] Fix duration (Closes #4188) 2014-11-16 14:55:22 +01:00
Philipp Hagemeister
0cf166ad4f release 2014.11.16 2014-11-16 00:51:46 +01:00
Philipp Hagemeister
2707b50ffe [spiegel] Correct handling of redirects to spiegel.tv (Closes #4211) 2014-11-16 00:51:31 +01:00
Philipp Hagemeister
939fe70de0 [spiegeltv] Match hash-style URLs (Closes #4210) 2014-11-16 00:40:09 +01:00
Philipp Hagemeister
89c15fe0b3 [spiegeltv] Modernize 2014-11-16 00:33:51 +01:00
Jaime Marquínez Ferrándiz
ec5f601670 [utils] Fix "write_json_file" for unicode names in python 2.x (fixes #4125) 2014-11-15 22:00:32 +01:00
Naglis Jonaitis
8caa0c9779 [bliptv] Fix the resolution of lookup ID (Closes #4197) 2014-11-15 16:56:04 +02:00
Philipp Hagemeister
e2548b5b25 release 2014.11.15.1 2014-11-15 15:21:50 +01:00
Philipp Hagemeister
bbefcf04bf [goldenmoustache] Fix title (Closes #4203) 2014-11-15 15:21:34 +01:00
Philipp Hagemeister
c7b0add86f [compat] Work around kwargs bugs in old 2.6 Python releases (Fixes #3813) 2014-11-15 15:17:19 +01:00
Philipp Hagemeister
a0155d93d9 release 2014.11.15 2014-11-15 11:01:54 +01:00
Philipp Hagemeister
00d9ef0b70 [mailru] Adapt to new data format (Fixes #4201) 2014-11-15 11:01:17 +01:00
Philipp Hagemeister
0cc8888038 [crunchyroll] Remove NOP code (#2782) 2014-11-15 00:34:03 +01:00
Philipp Hagemeister
c735450e07 release 2014.11.14 2014-11-14 22:27:56 +01:00
Jaime Marquínez Ferrándiz
71f8c7ce7a [mtvservices:embedded] Improve config url (fixes #4092) 2014-11-14 19:02:18 +01:00
Jaime Marquínez Ferrándiz
5fee0eeac0 [ComedyCentralShows] Use the rtmp urls transform function from the MTV IE (fixes #3364)
It produces the right mp4 urls, so we stop preferring the rtmp urls.
2014-11-14 18:36:04 +01:00
Philipp Hagemeister
eb4157fd17 [utils] Fix struct.pack call on very old Python versions (#4181) 2014-11-14 00:39:32 +01:00
Philipp Hagemeister
69ede8ef81 release 2014.11.13.3 2014-11-13 16:28:24 +01:00
Philipp Hagemeister
609a61e3e6 [npo] Improve npo.nl (Fixes #4173) 2014-11-13 16:28:05 +01:00
Philipp Hagemeister
bf951c5e29 release 2014.11.13.2 2014-11-13 16:12:54 +01:00
Philipp Hagemeister
af63fed7d8 [generic] Add support for livestream embeds (Fixes #4185) 2014-11-13 16:12:51 +01:00
Philipp Hagemeister
68d1d41c03 Credit @yaccz for freevideo (#4131) 2014-11-13 15:59:48 +01:00
Philipp Hagemeister
3deed1e91a [freevideo] Simplify and raise error for foreigners (Fixes #4131) 2014-11-13 15:59:22 +01:00
Philipp Hagemeister
11b28e93d3 Merge remote-tracking branch 'yaccz/add-extractor/freevideo' 2014-11-13 15:53:16 +01:00
Philipp Hagemeister
c3d582985f release 2014.11.13.1 2014-11-13 15:42:48 +01:00
Philipp Hagemeister
4c0924bb24 [utils] Fix intlist_to_bytes in Python 2 (#4181) 2014-11-13 15:28:42 +01:00
Philipp Hagemeister
3fa5bb3802 [sexu] Modernize (#4171) 2014-11-13 15:20:49 +01:00
Philipp Hagemeister
c47ec62b83 Merge remote-tracking branch 'peugeot/sexu' 2014-11-13 15:18:38 +01:00
Philipp Hagemeister
e4bdb37ec6 [spiegel] Add support for embeds 2014-11-13 15:02:31 +01:00
Philipp Hagemeister
3e6e4999ca [test/helper] Improve output 2014-11-13 14:55:45 +01:00
Philipp Hagemeister
0e15e725a0 [spiegel] Modernize 2014-11-13 14:45:17 +01:00
peugeot
437f68d868 Update sexu.py 2014-11-13 14:02:53 +01:00
peugeot
d91d124081 fix python 2 test 2014-11-13 13:57:10 +01:00
Philipp Hagemeister
2d42905b68 release 2014.11.13 2014-11-13 09:57:58 +01:00
Jaime Marquínez Ferrándiz
cbe71cb41d Merge pull request #4178 from awojnowski/master
Fix YouTube Signature Extraction
2014-11-13 08:24:29 +01:00
Aaron Wojnowski
894dd8682e Fix YouTube signature extraction. 2014-11-13 00:33:27 -06:00
Jaime Marquínez Ferrándiz
9e05d039e0 [dailymotion] Fix extraction of vevo videos (fixes #4168) 2014-11-12 23:32:27 +01:00
peugeot
bbd5f2de5e [sexu] initial support 2014-11-12 20:41:13 +01:00
Naglis Jonaitis
73689dafbf [tvplay] Fix f4m URL extraction (Closes #4119)
Add query parameters which are needed by AkamaiHD F4M player.
Also, modernize a bit.
2014-11-12 19:26:00 +02:00
Philipp Hagemeister
4b50ba0989 Credit @xantares for goldenmoustache (#4128) 2014-11-12 15:53:00 +01:00
Philipp Hagemeister
5ccaddf5b1 [goldenmoustache] Simplify (#4128) 2014-11-12 15:36:59 +01:00
Philipp Hagemeister
0b201a3134 Merge remote-tracking branch 'xantares/goldenmoustache' 2014-11-12 15:34:31 +01:00
Philipp Hagemeister
ffe38646ca [funnyordie] Remove test md5sum (Fixes #4113) 2014-11-12 15:33:15 +01:00
Philipp Hagemeister
b703ab4d7f Merge remote-tracking branch 'michael-k/links' 2014-11-12 15:31:54 +01:00
Philipp Hagemeister
c6afed48ff [YoutubeDL] guard against strange sys.stdouts 2014-11-12 15:30:26 +01:00
Michael Käufl
732c848c14 [abc] Update test case
Old video has expired.
2014-11-12 15:26:29 +01:00
Michael Käufl
9d2a4dae90 [allocine] Update test 2014-11-12 15:26:09 +01:00
Michael Käufl
7009a9047a [byutv] Update test 2014-11-12 15:24:37 +01:00
Michael Käufl
498942f187 [test_youtube_signature] Fix import
Broken in commit 8c25f81bee
2014-11-12 15:23:55 +01:00
Philipp Hagemeister
28465df1ff [youjizz] Modernize (#4131) 2014-11-12 15:19:23 +01:00
Philipp Hagemeister
ef89dba58f [myspass] Modernize test case 2014-11-12 15:01:52 +01:00
Philipp Hagemeister
13ba3a6461 [bandcamp:album] Fix test case 2014-11-12 15:00:54 +01:00
Philipp Hagemeister
8f6ec4bbe6 release 2014.11.12.1 2014-11-12 11:44:26 +01:00
Jaime Marquínez Ferrándiz
c295490830 [YoutubeDL] Fix bug in the detection of formats that don't contain video (fixes #4150)
If the format requested was not available, we called the method '.get' on None.
2014-11-12 09:42:35 +01:00
Jaime Marquínez Ferrándiz
eb4cb42a02 [ted] Extract duration (closes #4155) 2014-11-12 09:30:57 +01:00
Philipp Hagemeister
7a8cbc72b2 release 2014.11.12 2014-11-12 08:46:34 +01:00
Pascal Brax
2774852c2f Fix MTV/GameTrailers "Bad Request" error
Bugfix for bugs #4123 and #4153
2014-11-12 01:10:08 +01:00
Naglis Jonaitis
bbcc21efd1 [wrzuta] Fallback to mp3 on unknown media type (#4156) 2014-11-11 16:47:54 +02:00
Naglis Jonaitis
60526d6bcb [wrzuta] Fix audio extension lookup (Closes #4156)
Also, replace deleted test case
2014-11-11 16:23:06 +02:00
Philipp Hagemeister
1d4df56d09 release 2014.11.09 2014-11-09 22:32:41 +01:00
Philipp Hagemeister
a1cf99d03a [YoutubeDL] Add playlist_id and playlist_title fields (Fixes #4139) 2014-11-09 22:32:35 +01:00
Naglis Jonaitis
3c6af203cc [streamcloud] Match URLs without fname (Closes #4144)
Also, modernize a bit.
2014-11-09 22:00:51 +02:00
Naglis Jonaitis
1a92e086a7 [tapely] Add Referer header (Closes #4138) 2014-11-09 15:01:12 +02:00
Jaime Marquínez Ferrándiz
519c73f267 Merge pull request #4136 from andikmu/master
fix swrmediathek for new formats.
2014-11-09 12:17:18 +01:00
Jaime Marquínez Ferrándiz
a6dae6c09c [ndr] Improve video url regex (fixes #4140) 2014-11-09 11:15:50 +01:00
Jaime Marquínez Ferrándiz
f866e474f3 [YoutubeDL] Don't download formats for merging if the first doesn't contain the video (#4132) 2014-11-09 10:59:56 +01:00
Philipp Hagemeister
8bb9b97c97 Merge remote-tracking branch 'origin/master' 2014-11-09 08:30:12 +01:00
andi
d6fdc38682 fix swrmediathek for new formats. 2014-11-08 15:56:35 +01:00
Jaime Marquínez Ferrándiz
c2b61af548 [options] Document the syntax for merging formats (closes #3940, closes #4132) 2014-11-08 15:09:04 +01:00
Jaime Marquínez Ferrándiz
2fdbf27ad8 [niconico:playlist] Use the same video url the webpage uses (closes #4133) 2014-11-08 14:53:23 +01:00
yac
3898c8a7b2 [FreeVideo] Add new extractor 2014-11-08 00:13:28 +01:00
Naglis Jonaitis
29ed169cd6 [wrzuta] Add mp3 as a possible format (Closes #4126) 2014-11-07 22:53:54 +02:00
xantares
b868c972d1 Add support for goldenmoustache.com 2014-11-07 17:44:06 +00:00
Jaime Marquínez Ferrándiz
9908e03528 Merge pull request #4076 from ghedo/direct_type
[generic] indicate when a direct video has been extracted
2014-11-06 22:23:14 +01:00
Jaime Marquínez Ferrándiz
1fe8fb8c20 [vice] Re-add extractor (fixes #4120)
The generic extraction no longer works.
2014-11-06 21:44:07 +01:00
Naglis Jonaitis
5d63b0aa93 [goshgay] Fix title extraction and modernize
Also remove width and height as they are not of the actual video.
2014-11-06 01:19:20 +02:00
Philipp Hagemeister
4164f0117e [utils] Remove unused import 2014-11-05 23:56:54 +01:00
Naglis Jonaitis
37aab27808 [brightcove] Extract m3u8 formats (#3541) 2014-11-06 00:14:33 +02:00
Jaime Marquínez Ferrándiz
6110bbbfdd [niconico] Catch deleted videos (closes #4064) 2014-11-05 19:52:34 +01:00
Jaime Marquínez Ferrándiz
cde9b380e6 Merge pull request #4110 from nemunaire/channel9-fix
[channel9] Fix extraction
2014-11-05 19:03:24 +01:00
Sergey M․
dab647a7b6 [cinemassacre] Keep both extraction approaches and make more robust (Closes #4109) 2014-11-05 21:32:46 +07:00
nemunaire
a316a83d2b [channel9] Fix extraction 2014-11-05 11:23:11 +01:00
Naglis Jonaitis
81b22aee8b [izlesene] Update test cases and modernize
The timestamp fluctuates with DST.
2014-11-05 01:00:33 +02:00
Philipp Hagemeister
a80c96eab0 release 2014.11.04 2014-11-04 23:42:09 +01:00
Philipp Hagemeister
20436c30c9 [youtube] Clarify output 2014-11-04 23:35:34 +01:00
Philipp Hagemeister
3828505646 [utils] Use a regexp instead of HTMLParser for get_element_by_attribute 2014-11-04 23:33:43 +01:00
Philipp Hagemeister
11fba1751d [imdb] Simplify 2014-11-04 23:26:23 +01:00
Philipp Hagemeister
12ea2f30cf [utils] Remove unused get_meta_content function 2014-11-04 23:20:39 +01:00
Philipp Hagemeister
9c3e870393 [gamespot] Remove unused import 2014-11-04 23:17:43 +01:00
Philipp Hagemeister
44789f2457 [ustream] Use modern helper function instead of old HTML parser 2014-11-04 23:15:16 +01:00
Philipp Hagemeister
711ede6e1b [heise] Fix description, thumbnail and format ID 2014-11-04 23:14:16 +01:00
Philipp Hagemeister
a32f253112 [gamespot] Modernize 2014-11-04 23:04:12 +01:00
Philipp Hagemeister
94bd361318 [youtube] Skip sts if missing (Fixes #4095, fixes #4103) 2014-11-04 22:45:43 +01:00
Philipp Hagemeister
acd40f64ed [cnn] Modernize test definitions 2014-11-04 22:25:15 +01:00
Sergey M․
766306450d [played] Capture and output error message 2014-11-04 17:34:53 +07:00
Sergey M․
e7642ab572 [wimp] Fix video URL regex 2014-11-04 17:13:17 +07:00
Naglis Jonaitis
bdf9701729 [generic/brightcove] Add a new test case for kijk.nl (#3541) 2014-11-03 23:13:46 +02:00
Naglis Jonaitis
b5af6fcdad [brightcove] Make _VALID_URL less greedy and check for empty URLs (#3541) 2014-11-03 23:12:24 +02:00
Philipp Hagemeister
278143df5b [test_compat] Ignore unicode_literals 2014-11-03 19:12:06 +01:00
Sergey M․
fdca55fe34 [trutube] Strip title 2014-11-03 20:14:18 +07:00
Jaime Marquínez Ferrándiz
4f195f55f0 Do not override stdlib html parser 'locatestarttagend' regex (fixes #4081)
'<a href="foo" ><img src="bar" / ></a>' wouldn't be parsed right (the problem is '/ >', '/>' worked fine).
We need to change it in python 2.6 (for example the description of youtube videos wouldn't be extracted).
2014-11-02 19:31:06 +01:00
Jaime Marquínez Ferrándiz
ac35c26686 [tests] Don't auto init YoutubeDL
It would print the debug headers for each test.
And nose uses a StringIO object for stdout, which in python 2.x doesn't have the 'encoding' attribute.
2014-11-02 17:53:12 +01:00
Michael Käufl
982a58d049 [README] Replace links to kernel.org with links to git-scm.com
Unlike kernel.org, the documentation at git-scm.com is up to date and
the rest of the git documentation is easily accessible to any git
newbie.
2014-11-02 16:07:40 +01:00
Philipp Hagemeister
42f7d2f588 [test_download] Fix import 2014-11-02 11:46:12 +01:00
Philipp Hagemeister
39f0a2a6b7 [test_swfinterp] Correct compilation on modern mxmlc versions 2014-11-02 11:41:33 +01:00
Philipp Hagemeister
ecc0c5ee01 [utils] Modernize 2014-11-02 11:37:49 +01:00
Philipp Hagemeister
451948b28c [compat] Modernize 2014-11-02 11:36:29 +01:00
Philipp Hagemeister
baa708036c [compat] Fix imports 2014-11-02 11:26:40 +01:00
Philipp Hagemeister
8c25f81bee [util] Move compatibility functions out of util
utils is large enough without these compatibility functions.

Everything that is present in newer versions of Python (i.e. with dev Python it's just an import) goes into compat.py .
Everything else (i.e. youtube-dl-specific helpers) goes into utils.py .
2014-11-02 11:23:42 +01:00
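
A minimal sketch of the shim pattern this commit describes (illustrative, assuming the usual Python 2/3 stdlib renames; not the actual compat.py source):

    # One canonical alias per stdlib name that moved between Python 2 and 3;
    # utils.py and the extractors then import only the compat_* name.
    try:
        import urllib.request as compat_urllib_request  # Python 3
    except ImportError:
        import urllib2 as compat_urllib_request  # Python 2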
Philipp Hagemeister
4c83c96795 [YoutubeDL] Include rtmpdump in exe versions -v output 2014-11-02 10:55:36 +01:00
Philipp Hagemeister
9580711841 [ffmpeg] Move version detection to utils 2014-11-02 10:50:30 +01:00
Philipp Hagemeister
c30ae9594c release 2014.11.02.1 2014-11-02 10:28:21 +01:00
Philipp Hagemeister
ffae28ae18 release 2014.11.02 2014-11-02 09:45:51 +01:00
Sergey M․
d9116714f2 [cinemassacre] Fix extraction (Closes #4083) 2014-11-02 08:01:14 +07:00
Philipp Hagemeister
08965906a8 [README] Update FAQ on Ubuntu (#4078) 2014-11-01 19:24:56 +01:00
Alessandro Ghedini
ccdd0ffb80 [generic] indicate when a direct video has been extracted
Fixes #4052.
2014-11-01 15:34:00 +01:00
Sergey M․
5263cdfcf9 [generic] Improve MLB iframe regex 2014-11-01 04:01:58 +07:00
Sergey M․
b2a68d14cf [mlb] Improve _VALID_URL (Closes #4063) 2014-11-01 04:01:18 +07:00
Sergey M․
6e1cff9c33 [canalplus] Improve and merge with d8 extractor 2014-10-31 21:54:30 +07:00
Sergey M․
72975729c8 [canalplus] Tweak extractor to support piwiplus (Closes #4046) 2014-10-31 20:19:30 +07:00
Sergey M․
d319948b6a [funnyordie] Add articles URL test 2014-10-31 19:26:56 +07:00
Sergey M.
9a4bf889f9 Merge pull request #4069 from anovicecodemonkey/support_funnyordie_articles_urls
[FunnyOrDie] Add support for "/articles/" URLs
2014-10-31 18:25:22 +05:00
anovicecodemonkey
2a834bdb21 [FunnyOrDie] Add support for "/articles/" URLs 2014-10-31 21:20:37 +10:30
Philipp Hagemeister
0d2c141865 [youtube] Detect formats 298 et al as mp4 (Fixes #4066) 2014-10-31 11:13:02 +01:00
Philipp Hagemeister
5ec39d8b96 release 2014.10.30 2014-10-30 09:53:48 +01:00
Philipp Hagemeister
7b6de3728a [youtube] Add format 266 (Fixes #4055) 2014-10-30 09:53:43 +01:00
Philipp Hagemeister
a51d3aa001 [youtube] Add support for formats 302 and 303 (Fixes #4060) 2014-10-30 09:43:11 +01:00
Philipp Hagemeister
2c8e03d937 Sort formats by fps as well 2014-10-30 09:40:52 +01:00
Philipp Hagemeister
fbb21cf528 [youtube] Add formats 298, 299 (Fixes #4056) 2014-10-30 09:34:13 +01:00
Naglis Jonaitis
b8a618f898 [ro220] Fix broken extractor and modernize (#4054) 2014-10-30 01:42:52 +02:00
Philipp Hagemeister
feb74960eb release 2014.10.29 2014-10-29 23:29:42 +01:00
Jaime Marquínez Ferrándiz
d65d628613 [crunchyroll] Fix building of ass subtitles (reported in #4019)
Parse the xml document instead of using regexes, otherwise unicode characters are left unescaped.
2014-10-29 21:19:20 +01:00
Philipp Hagemeister
ac645ac7d0 [generic] Allow soundcloud embeds with additional attributes 2014-10-29 20:27:58 +01:00
Philipp Hagemeister
7d11297f3f Merge branch 'master' of github.com:rg3/youtube-dl 2014-10-29 20:10:07 +01:00
Philipp Hagemeister
6ad4013d40 [drtv] Allow fractional timestamps (Fixes #4059) 2014-10-29 20:10:00 +01:00
Sergey M․
dbd1283d31 [naver] Capture and output error message (#4057) 2014-10-29 21:50:37 +07:00
Sergey M․
c451d4f553 [trutube] Fix extraction 2014-10-29 21:16:10 +07:00
Jaime Marquínez Ferrándiz
8abec2c8bb [test_utils] Fix compat_getenv and compat_expanduser tests on python 3.x 2014-10-29 11:13:34 +01:00
Jaime Marquínez Ferrándiz
a9bad429b3 [niconico] Add extractor for playlists (closes #4043) 2014-10-29 11:04:48 +01:00
Philipp Hagemeister
50c8266ef0 Merge branch 'master' of github.com:rg3/youtube-dl 2014-10-28 23:40:44 +01:00
Philipp Hagemeister
00edd4f9be [laola1tv] Mark as broken
When the f4m downloader gets live stream support, I expect this to work magically or with very minor changes.
2014-10-28 17:29:27 +01:00
Philipp Hagemeister
ee966928af [f4m] Support bootstrap URLs 2014-10-28 17:27:41 +01:00
Philipp Hagemeister
e5193599ec [laola1tv] Add new extractor
The extractor works fine, but the f4m downloader cannot handle the resulting bootstrap information.
2014-10-28 16:51:34 +01:00
Philipp Hagemeister
01d663bca3 [auengine] Simplify 2014-10-28 15:51:15 +01:00
Sergey M․
e0c51cdadc [vk] Generalize errors 2014-10-28 21:35:25 +07:00
Sergey M․
9334f8f17a [vk] Handle deleted videos 2014-10-28 21:06:07 +07:00
Sergey M․
632256d9ec [wimp] Update video URL regex 2014-10-28 20:35:02 +07:00
Philipp Hagemeister
03df7baa6a Start documentation on how to embed youtube-dl 2014-10-28 12:54:39 +01:00
Philipp Hagemeister
3511266bc3 [YoutubeDL] Simplify API of YoutubeDL
Calling add_default_extractors twice should be harmless since the first set of extractors will match.
2014-10-28 12:54:29 +01:00
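
The resulting calling convention, as exercised by the test changes later in this diff (a sketch; the option value is an example):

    from youtube_dl import YoutubeDL

    # With auto_init=False the caller registers extractors explicitly; doing
    # it twice is harmless, since the first registered set already matches.
    ydl = YoutubeDL({'skip_download': True}, auto_init=False)
    ydl.add_default_info_extractors()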
Philipp Hagemeister
9fdece5d34 [srmediathek] Choose variable name more wisely 2014-10-28 10:44:47 +01:00
Philipp Hagemeister
bbf1092ad0 [fktv] Remove unused import 2014-10-28 10:44:17 +01:00
Philipp Hagemeister
9ef55c5bbc [quickvid] Add new extractor 2014-10-28 10:41:37 +01:00
Philipp Hagemeister
48a24ab746 [generic] Fix HTML5 video regexp 2014-10-28 10:41:24 +01:00
Philipp Hagemeister
68acdbda9d Merge remote-tracking branch 'origin/master' 2014-10-28 09:13:23 +01:00
Philipp Hagemeister
27c542c06f [iconosquare] Simplify 2014-10-28 09:12:28 +01:00
Naglis Jonaitis
aaa399d2f6 [sphinx] Fix version import 2014-10-27 18:49:48 +02:00
Philipp Hagemeister
b2e6a1c14c release 2014.10.27 2014-10-27 02:44:07 +01:00
Philipp Hagemeister
8cc3eba79a [phoenix] Add new extractor (Fixes #4036) 2014-10-27 02:43:59 +01:00
Philipp Hagemeister
b0fb6d4db1 [ku6] Modernize 2014-10-27 02:32:44 +01:00
Philipp Hagemeister
81515ad9f6 [extractor/common] Improve m3u8 output 2014-10-27 02:28:37 +01:00
Philipp Hagemeister
8112d4b284 [lrt] Modernize 2014-10-27 02:27:49 +01:00
Philipp Hagemeister
bf7aa6301b [fktv] Modernize 2014-10-27 02:26:05 +01:00
Philipp Hagemeister
aea856621f [zdf] Simplify 2014-10-27 02:14:07 +01:00
Philipp Hagemeister
f24a5a2faa Merge remote-tracking branch 'olebowle/ard' 2014-10-27 01:36:50 +01:00
Philipp Hagemeister
ecfe623422 [heise] Fix extraction
Now they use an XML format instead of JSON.
2014-10-27 01:33:51 +01:00
Philipp Hagemeister
4a6c94288a [kickstarter] Simplify and fix test case 2014-10-27 01:16:18 +01:00
Philipp Hagemeister
10e3d73472 [nbc] Fix ThePlatform embedded videos 2014-10-27 01:14:17 +01:00
Philipp Hagemeister
15956b5aa1 [promptfile] Fix check for deleted videos 2014-10-27 00:50:22 +01:00
Philipp Hagemeister
586f7082ef [francetv] Remove changing md5sum 2014-10-27 00:46:34 +01:00
Philipp Hagemeister
d6d9186f0d [generic] Fix test title 2014-10-27 00:45:15 +01:00
Philipp Hagemeister
2e9ff8f362 [gorillavid] Fix test title 2014-10-27 00:44:27 +01:00
Philipp Hagemeister
6407432333 [Makefile] remove temporary files in clean target 2014-10-27 00:40:07 +01:00
Philipp Hagemeister
f744c0f398 [test_download] Improve error message 2014-10-27 00:39:39 +01:00
Philipp Hagemeister
249efaf44b [pornhub] Modernize and fix test definition 2014-10-27 00:33:35 +01:00
Philipp Hagemeister
8d32abff9e [ruhd] Simplify 2014-10-27 00:20:54 +01:00
Philipp Hagemeister
94f052cbf4 [syfy] Remove test checksum
We have the minsize test now.
2014-10-27 00:19:15 +01:00
Philipp Hagemeister
446a03bd96 [ustream:channel] Change test playlist size (Seems to have been limited that way on the website as well) 2014-10-27 00:18:10 +01:00
Philipp Hagemeister
6009b69f81 [vgtv] Fix test title 2014-10-27 00:16:01 +01:00
Philipp Hagemeister
3d6047113c [vgtv] Simplify 2014-10-27 00:14:52 +01:00
Philipp Hagemeister
9dec99303d [vimeo:review] Fix test title 2014-10-27 00:13:40 +01:00
Philipp Hagemeister
7706927370 [vine:user] Adapt test to changed list size 2014-10-27 00:11:34 +01:00
Philipp Hagemeister
3adba6fa2a [xtube] Fix test description 2014-10-27 00:08:37 +01:00
Philipp Hagemeister
f46a8702cc [youtube:playlist] Fix test title 2014-10-27 00:06:47 +01:00
Philipp Hagemeister
8d11b59bbb [ynet] Remove test md5sums
These fluctuate regularly.
2014-10-27 00:06:00 +01:00
Philipp Hagemeister
cf501a23d2 [srmediathek] Correct IE_NAME/IE_DESC 2014-10-26 23:23:53 +01:00
Philipp Hagemeister
2bcae58d46 [srmediathek] New extractor 2014-10-26 23:23:10 +01:00
Philipp Hagemeister
c9f08154a3 Remove unused imports 2014-10-26 23:13:42 +01:00
Philipp Hagemeister
526b276fd7 [faz] Modernize 2014-10-26 23:11:15 +01:00
Philipp Hagemeister
77ec444d9a release 2014.10.26.2 2014-10-26 21:49:52 +01:00
Philipp Hagemeister
bfc2bedcfc [youtube] Make confirm_age non-fatal (#4042) 2014-10-26 21:49:29 +01:00
Philipp Hagemeister
83855f3a1f [livestream:original] Fix RTMP parameters (Fixes #4040) 2014-10-26 21:44:29 +01:00
Philipp Hagemeister
50b51830fb [ffmpeg] Fix typo 2014-10-26 21:31:51 +01:00
Philipp Hagemeister
3d6eed9b52 release 2014.10.26.1 2014-10-26 21:03:38 +01:00
Philipp Hagemeister
1a253e134c [ffmpeg] Fix call to ffprobe (Fixes #4041) 2014-10-26 21:03:16 +01:00
Philipp Hagemeister
6194bb1419 [ffmpeg] Make downloader optional (Fixes #4039) 2014-10-26 21:00:42 +01:00
Philipp Hagemeister
37d66e7f1e [generic] Correct call to _webpage_read_full_content 2014-10-26 20:58:09 +01:00
Philipp Hagemeister
70b7e3fbb6 [generic] Add a test case for direct links with broken HEAD (#4032) 2014-10-26 20:49:51 +01:00
Jaime Marquínez Ferrándiz
579657ad87 [soundcloud] Set the 'webpage_url' field for each track
For playlists, YoutubeDL would set it to the playlist url.
2014-10-26 19:08:36 +01:00
Jaime Marquínez Ferrándiz
5f82b129e0 [ffmpeg] Also look into stderr for extracting the version
At least with avconv 11, it will print 'avconv version 11, ..' to stderr, not stdout.
2014-10-26 18:11:31 +01:00
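
A sketch of the resulting probing approach (illustrative; the helper name is hypothetical, not youtube-dl's actual function):

    import re
    import subprocess

    def _probe_version(exe):
        # avconv 11 prints its banner to stderr, older ffmpeg/avconv builds
        # to stdout, so capture both and match whichever is non-empty.
        proc = subprocess.Popen(
            [exe, '-version'],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        first_line = (out or err).decode('ascii', 'ignore').split('\n')[0]
        m = re.search(r'version\s+([-0-9._a-zA-Z]+)', first_line)
        return m.group(1) if m else None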
Philipp Hagemeister
64269e4d01 Move AUTHORS to root (closes #2985) 2014-10-26 18:01:00 +01:00
Ole Ernst
bfd91588f3 [ard] make rss match more universal 2014-10-22 14:24:53 +02:00
Ole Ernst
3741302a10 [ard] Add rss support 2014-10-10 20:35:34 +02:00
134 changed files with 2553 additions and 1425 deletions

AUTHORS (new file)

@@ -0,0 +1,84 @@
Ricardo Garcia Gonzalez
Danny Colligan
Benjamin Johnson
Vasyl' Vavrychuk
Witold Baryluk
Paweł Paprota
Gergely Imreh
Rogério Brito
Philipp Hagemeister
Sören Schulze
Kevin Ngo
Ori Avtalion
shizeeg
Filippo Valsorda
Christian Albrecht
Dave Vasilevsky
Jaime Marquínez Ferrándiz
Jeff Crouse
Osama Khalid
Michael Walter
M. Yasoob Ullah Khalid
Julien Fraichard
Johny Mo Swag
Axel Noack
Albert Kim
Pierre Rudloff
Huarong Huo
Ismael Mejía
Steffan 'Ruirize' James
Andras Elso
Jelle van der Waa
Marcin Cieślak
Anton Larionov
Takuya Tsuchida
Sergey M.
Michael Orlitzky
Chris Gahan
Saimadhav Heblikar
Mike Col
Oleg Prutz
pulpe
Andreas Schmitz
Michael Kaiser
Niklas Laxström
David Triendl
Anthony Weems
David Wagner
Juan C. Olivares
Mattias Harrysson
phaer
Sainyam Kapoor
Nicolas Évrard
Jason Normore
Hoje Lee
Adam Thalhammer
Georg Jähnig
Ralf Haring
Koki Takahashi
Ariset Llerena
Adam Malcontenti-Wilson
Tobias Bell
Naglis Jonaitis
Charles Chen
Hassaan Ali
Dobrosław Żybort
David Fabijan
Sebastian Haas
Alexander Kirk
Erik Johnson
Keith Beckman
Ole Ernst
Aaron McDaniel (mcd1992)
Magnus Kolstad
Hari Padmanaban
Carlos Ramos
5moufl
lenaten
Dennis Scheiba
Damon Timm
winwon
Xavier Beynon
Gabriel Schubiner
xantares
Jan Matějka

Makefile

@@ -1,7 +1,7 @@
 all: youtube-dl README.md README.txt youtube-dl.1 youtube-dl.bash-completion youtube-dl.zsh youtube-dl.fish

 clean:
-	rm -rf youtube-dl.1.temp.md youtube-dl.1 youtube-dl.bash-completion README.txt MANIFEST build/ dist/ .coverage cover/ youtube-dl.tar.gz youtube-dl.zsh youtube-dl.fish
+	rm -rf youtube-dl.1.temp.md youtube-dl.1 youtube-dl.bash-completion README.txt MANIFEST build/ dist/ .coverage cover/ youtube-dl.tar.gz youtube-dl.zsh youtube-dl.fish *.dump *.part

 cleanall: clean
 	rm -f youtube-dl youtube-dl.exe

README.md

@@ -131,17 +131,19 @@ which means you can modify it, redistribute it or use it however you like.
     %(upload_date)s for the upload date
     (YYYYMMDD), %(extractor)s for the provider
     (youtube, metacafe, etc), %(id)s for the
-    video id, %(playlist)s for the playlist the
+    video id, %(playlist_title)s,
+    %(playlist_id)s, or %(playlist)s (=title if
+    present, ID otherwise) for the playlist the
     video is in, %(playlist_index)s for the
-    position in the playlist and %% for a
-    literal percent. %(height)s and %(width)s
-    for the width and height of the video
-    format. %(resolution)s for a textual
+    position in the playlist. %(height)s and
+    %(width)s for the width and height of the
+    video format. %(resolution)s for a textual
     description of the resolution of the video
-    format. Use - to output to stdout. Can also
-    be used to download to a different
-    directory, for example with -o '/my/downloa
-    ds/%(uploader)s/%(title)s-%(id)s.%(ext)s' .
+    format. %% for a literal percent. Use - to
+    output to stdout. Can also be used to
+    download to a different directory, for
+    example with -o '/my/downloads/%(uploader)s
+    /%(title)s-%(id)s.%(ext)s' .
 --autonumber-size NUMBER  Specifies the number of digits in
     %(autonumber)s when it is present in output
     filename template or --auto-number option
@@ -239,8 +241,13 @@ which means you can modify it, redistribute it or use it however you like.
"worst", "worstvideo" and "worstaudio". By "worst", "worstvideo" and "worstaudio". By
default, youtube-dl will pick the best default, youtube-dl will pick the best
quality. Use commas to download multiple quality. Use commas to download multiple
audio formats, such as -f audio formats, such as -f
136/137/mp4/bestvideo,140/m4a/bestaudio 136/137/mp4/bestvideo,140/m4a/bestaudio.
You can merge the video and audio of two
formats into a single file using -f <video-
format>+<audio-format> (requires ffmpeg or
avconv), for example -f
bestvideo+bestaudio.
--all-formats download all available video formats --all-formats download all available video formats
--prefer-free-formats prefer free video formats unless a specific --prefer-free-formats prefer free video formats unless a specific
one is requested one is requested
@@ -381,7 +388,7 @@ Again, from then on you'll be able to update with `sudo youtube-dl -U`.
 YouTube changed their playlist format in March 2014 and later on, so you'll need at least youtube-dl 2014.07.25 to download all YouTube videos.

-If you have installed youtube-dl with a package manager, pip, setup.py or a tarball, please use that to update. Note that Ubuntu packages do not seem to get updated anymore. Since we are not affiliated with Ubuntu, there is little we can do. Feel free to report bugs to the Ubuntu packaging guys - all they have to do is update the package to a somewhat recent version. See above for a way to update.
+If you have installed youtube-dl with a package manager, pip, setup.py or a tarball, please use that to update. Note that Ubuntu packages do not seem to get updated anymore. Since we are not affiliated with Ubuntu, there is little we can do. Feel free to [report bugs](https://bugs.launchpad.net/ubuntu/+source/youtube-dl/+filebug) to the [Ubuntu packaging guys](mailto:ubuntu-motu@lists.ubuntu.com?subject=outdated%20version%20of%20youtube-dl) - all they have to do is update the package to a somewhat recent version. See above for a way to update.

 ### Do I always have to pass in `--max-quality FORMAT`, or `-citw`?
@@ -500,7 +507,7 @@ If you want to add support for a new site, you can follow this quick list (assum
 6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries. The tests will be then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc.
 7. Have a look at [`youtube_dl/common/extractor/common.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should return](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L38). Add tests and code for as many as you want.
 8. If you can, check the code with [pyflakes](https://pypi.python.org/pypi/pyflakes) (a good idea) and [pep8](https://pypi.python.org/pypi/pep8) (optional, ignore E501).
-9. When the tests pass, [add](https://www.kernel.org/pub/software/scm/git/docs/git-add.html) the new files and [commit](https://www.kernel.org/pub/software/scm/git/docs/git-commit.html) them and [push](https://www.kernel.org/pub/software/scm/git/docs/git-push.html) the result, like this:
+9. When the tests pass, [add](http://git-scm.com/docs/git-add) the new files and [commit](http://git-scm.com/docs/git-commit) them and [push](http://git-scm.com/docs/git-push) the result, like this:

         $ git add youtube_dl/extractor/__init__.py
         $ git add youtube_dl/extractor/yourextractor.py
@@ -511,6 +518,20 @@ If you want to add support for a new site, you can follow this quick list (assum
 In any case, thank you very much for your contributions!

+# EMBEDDING YOUTUBE-DL
+
+youtube-dl makes the best effort to be a good command-line program, and thus should be callable from any programming language. If you encounter any problems parsing its output, feel free to [create a report](https://github.com/rg3/youtube-dl/issues/new).
+
+From a Python program, you can embed youtube-dl in a more powerful fashion, like this:
+
+    import youtube_dl
+
+    ydl_opts = {}
+    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
+        ydl.download(['http://www.youtube.com/watch?v=BaW_jenozKc'])
+
+Most likely, you'll want to use various options. For a list of what can be done, have a look at [youtube_dl/YoutubeDL.py](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/YoutubeDL.py#L69). For a start, if you want to intercept youtube-dl's output, set a `logger` object.
+
 # BUGS

 Bugs and suggestions should be reported at: <https://github.com/rg3/youtube-dl/issues> . Unless you were prompted so or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email.
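
Tying the two README additions above together, a sketch that embeds youtube-dl, requests the new merged-format syntax, and intercepts output via a `logger` object (the `format` and `logger` option names are the documented ones; routing them through the stdlib logging module is just one possible choice):

    import logging

    import youtube_dl

    logging.basicConfig(level=logging.INFO)

    ydl_opts = {
        # New in this release: merge separate video and audio formats
        # (requires ffmpeg or avconv).
        'format': 'bestvideo+bestaudio',
        # Intercept youtube-dl's messages instead of printing to stdout.
        'logger': logging.getLogger('youtube-dl'),
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download(['http://www.youtube.com/watch?v=BaW_jenozKc'])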

docs/conf.py

@@ -44,8 +44,8 @@ copyright = u'2014, Ricardo Garcia Gonzalez'
 # built documents.
 #
 # The short X.Y version.
-import youtube_dl
-version = youtube_dl.__version__
+from youtube_dl.version import __version__
+version = __version__
 # The full version, including alpha/beta/rc tags.
 release = version

test/helper.py

@@ -57,7 +57,7 @@ class FakeYDL(YoutubeDL):
         # Different instances of the downloader can't share the same dictionary
         # some test set the "sublang" parameter, which would break the md5 checks.
         params = get_params(override=override)
-        super(FakeYDL, self).__init__(params)
+        super(FakeYDL, self).__init__(params, auto_init=False)
         self.result = []

     def to_screen(self, s, skip_eol=None):
@@ -145,7 +145,8 @@ def expect_info_dict(self, expected_dict, got_dict):
     info_dict_str = ''.join(
         '    %s: %s,\n' % (_repr(k), _repr(v))
         for k, v in test_info_dict.items())
-    write_string('\n"info_dict": {\n' + info_dict_str + '}\n', out=sys.stderr)
+    write_string(
+        '\n\'info_dict\': {\n' + info_dict_str + '}\n', out=sys.stderr)
     self.assertFalse(
         missing_keys,
         'Missing keys in test definition: %s' % (
@@ -171,3 +172,13 @@ def assertGreaterEqual(self, got, expected, msg=None):
     if msg is None:
         msg = '%r not greater than or equal to %r' % (got, expected)
     self.assertTrue(got >= expected, msg)
+
+
+def expect_warnings(ydl, warnings_re):
+    real_warning = ydl.report_warning
+
+    def _report_warning(w):
+        if not any(re.search(w_re, w) for w_re in warnings_re):
+            real_warning(w)
+
+    ydl.report_warning = _report_warning
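
For illustration, a hypothetical call (test_download.py below passes each test case's `expected_warnings` list the same way; the pattern here is an example, not from the diff):

    ydl = FakeYDL()
    # Warnings matching a listed regex are swallowed; everything else still
    # reaches the real report_warning.
    expect_warnings(ydl, [r'^Falling back'])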

test/swftests/ConstArrayAccess.as (new file)

@@ -0,0 +1,18 @@
// input: []
// output: 4
package {
public class ConstArrayAccess {
	private static const x:int = 2;
	private static const ar:Array = ["42", "3411"];

	public static function main():int{
		var c:ConstArrayAccess = new ConstArrayAccess();
		return c.f();
	}

	public function f(): int {
		return ar[1].length;
	}
}
}

test/swftests/ConstantInt.as (new file)

@@ -0,0 +1,12 @@
// input: []
// output: 2
package {
public class ConstantInt {
	private static const x:int = 2;

	public static function main():int{
		return x;
	}
}
}

test/swftests/DictCall.as (new file)

@@ -0,0 +1,10 @@
// input: [{"x": 1, "y": 2}]
// output: 3
package {
public class DictCall {
	public static function main(d:Object):int{
		return d.x + d.y;
	}
}
}

test/swftests/EqualsOperator.as (new file)

@@ -0,0 +1,10 @@
// input: []
// output: false
package {
public class EqualsOperator {
	public static function main():Boolean{
		return 1 == 2;
	}
}
}

test/swftests/MemberAssignment.as (new file)

@@ -0,0 +1,22 @@
// input: [1]
// output: 2
package {
public class MemberAssignment {
	public var v:int;

	public function g():int {
		return this.v;
	}

	public function f(a:int):int{
		this.v = a;
		return this.v + this.g();
	}

	public static function main(a:int): int {
		var v:MemberAssignment = new MemberAssignment();
		return v.f(a);
	}
}
}

test/swftests/NeOperator.as (new file)

@@ -0,0 +1,24 @@
// input: []
// output: 123
package {
public class NeOperator {
	public static function main(): int {
		var res:int = 0;
		if (1 != 2) {
			res += 3;
		} else {
			res += 4;
		}
		if (2 != 2) {
			res += 10;
		} else {
			res += 20;
		}
		if (9 == 9) {
			res += 100;
		}
		return res;
	}
}
}

test/swftests/PrivateVoidCall.as (new file)

@@ -0,0 +1,22 @@
// input: []
// output: 9
package {
public class PrivateVoidCall {
	public static function main():int{
		var f:OtherClass = new OtherClass();
		f.func();
		return 9;
	}
}
}

class OtherClass {
	private function pf():void {
		;
	}

	public function func():void {
		this.pf();
	}
}

test/swftests/StringBasics.as (new file)

@@ -0,0 +1,11 @@
// input: []
// output: 3
package {
public class StringBasics {
	public static function main():int{
		var s:String = "abc";
		return s.length;
	}
}
}

test/swftests/StringCharCodeAt.as (new file)

@@ -0,0 +1,11 @@
// input: []
// output: 9897
package {
public class StringCharCodeAt {
	public static function main():int{
		var s:String = "abc";
		return s.charCodeAt(1) * 100 + s.charCodeAt();
	}
}
}

test/swftests/StringConversion.as (new file)

@@ -0,0 +1,11 @@
// input: []
// output: 2
package {
public class StringConversion {
	public static function main():int{
		var s:String = String(99);
		return s.length;
	}
}
}

test/test_age_restriction.py

@@ -1,4 +1,5 @@
 #!/usr/bin/env python
+from __future__ import unicode_literals

 # Allow direct execution
 import os
@@ -19,7 +20,7 @@ def _download_restricted(url, filename, age):
         'age_limit': age,
         'skip_download': True,
         'writeinfojson': True,
-        "outtmpl": "%(id)s.%(ext)s",
+        'outtmpl': '%(id)s.%(ext)s',
     }
     ydl = YoutubeDL(params)
     ydl.add_default_info_extractors()

test/test_compat.py (new file)

@@ -0,0 +1,46 @@
#!/usr/bin/env python
# coding: utf-8
from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.utils import get_filesystem_encoding
from youtube_dl.compat import (
    compat_getenv,
    compat_expanduser,
)


class TestCompat(unittest.TestCase):
    def test_compat_getenv(self):
        test_str = 'тест'
        os.environ['YOUTUBE-DL-TEST'] = (
            test_str if sys.version_info >= (3, 0)
            else test_str.encode(get_filesystem_encoding()))
        self.assertEqual(compat_getenv('YOUTUBE-DL-TEST'), test_str)

    def test_compat_expanduser(self):
        old_home = os.environ.get('HOME')
        test_str = 'C:\Documents and Settings\тест\Application Data'
        os.environ['HOME'] = (
            test_str if sys.version_info >= (3, 0)
            else test_str.encode(get_filesystem_encoding()))
        self.assertEqual(compat_expanduser('~'), test_str)
        os.environ['HOME'] = old_home

    def test_all_present(self):
        import youtube_dl.compat
        all_names = youtube_dl.compat.__all__
        present_names = set(filter(
            lambda c: '_' in c and not c.startswith('_'),
            dir(youtube_dl.compat))) - set(['unicode_literals'])
        self.assertEqual(all_names, sorted(present_names))


if __name__ == '__main__':
    unittest.main()

test/test_download.py

@@ -1,5 +1,7 @@
 #!/usr/bin/env python
+from __future__ import unicode_literals
+
 # Allow direct execution
 import os
 import sys
@@ -8,6 +10,7 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 from test.helper import (
     assertGreaterEqual,
+    expect_warnings,
     get_params,
     gettestcases,
     expect_info_dict,
@@ -22,10 +25,12 @@ import json
 import socket

 import youtube_dl.YoutubeDL
-from youtube_dl.utils import (
+from youtube_dl.compat import (
     compat_http_client,
     compat_urllib_error,
     compat_HTTPError,
+)
+from youtube_dl.utils import (
     DownloadError,
     ExtractorError,
     format_bytes,
@@ -93,13 +98,14 @@ def generator(test_case):
     params.setdefault('extract_flat', True)
     params.setdefault('skip_download', True)

-    ydl = YoutubeDL(params)
+    ydl = YoutubeDL(params, auto_init=False)
     ydl.add_default_info_extractors()
     finished_hook_called = set()

     def _hook(status):
         if status['status'] == 'finished':
             finished_hook_called.add(status['filename'])
     ydl.add_progress_hook(_hook)
+    expect_warnings(ydl, test_case.get('expected_warnings', []))

     def get_tc_filename(tc):
         return tc.get('file') or ydl.prepare_filename(tc.get('info_dict', {}))
@@ -183,7 +189,9 @@ def generator(test_case):
             md5_for_file = _file_md5(tc_filename)
             self.assertEqual(md5_for_file, tc['md5'])
         info_json_fn = os.path.splitext(tc_filename)[0] + '.info.json'
-        self.assertTrue(os.path.exists(info_json_fn))
+        self.assertTrue(
+            os.path.exists(info_json_fn),
+            'Missing info file %s' % info_json_fn)
         with io.open(info_json_fn, encoding='utf-8') as infof:
             info_dict = json.load(infof)
@@ -204,9 +212,9 @@ for n, test_case in enumerate(defs):
     tname = 'test_' + str(test_case['name'])
     i = 1
     while hasattr(TestDownload, tname):
-        tname = 'test_' + str(test_case['name']) + '_' + str(i)
+        tname = 'test_%s_%d' % (test_case['name'], i)
         i += 1

-    test_method.__name__ = tname
+    test_method.__name__ = str(tname)
     setattr(TestDownload, test_method.__name__, test_method)
     del test_method

test/test_execution.py

@@ -1,3 +1,6 @@
+#!/usr/bin/env python
+from __future__ import unicode_literals
+
 import unittest

 import sys
@@ -6,17 +9,19 @@ import subprocess
 rootDir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

 try:
     _DEV_NULL = subprocess.DEVNULL
 except AttributeError:
     _DEV_NULL = open(os.devnull, 'wb')

 class TestExecution(unittest.TestCase):
     def test_import(self):
         subprocess.check_call([sys.executable, '-c', 'import youtube_dl'], cwd=rootDir)

     def test_module_exec(self):
-        if sys.version_info >= (2,7): # Python 2.6 doesn't support package execution
+        if sys.version_info >= (2, 7):  # Python 2.6 doesn't support package execution
             subprocess.check_call([sys.executable, '-m', 'youtube_dl', '--version'], cwd=rootDir, stdout=_DEV_NULL)

     def test_main_exec(self):
test/test_subtitles.py

@@ -1,4 +1,5 @@
 #!/usr/bin/env python
+from __future__ import unicode_literals

 # Allow direct execution
 import os
@@ -74,7 +75,7 @@ class TestYoutubeSubtitles(BaseTestSubtitles):
         self.assertEqual(md5(subtitles['en']), '3cb210999d3e021bd6c7f0ea751eab06')

     def test_youtube_list_subtitles(self):
-        self.DL.expect_warning(u'Video doesn\'t have automatic captions')
+        self.DL.expect_warning('Video doesn\'t have automatic captions')
         self.DL.params['listsubtitles'] = True
         info_dict = self.getInfoDict()
         self.assertEqual(info_dict, None)
@@ -87,7 +88,7 @@ class TestYoutubeSubtitles(BaseTestSubtitles):
         self.assertTrue(subtitles['it'] is not None)

     def test_youtube_nosubtitles(self):
-        self.DL.expect_warning(u'video doesn\'t have subtitles')
+        self.DL.expect_warning('video doesn\'t have subtitles')
         self.url = 'n5BB19UTcdA'
         self.DL.params['writesubtitles'] = True
         self.DL.params['allsubtitles'] = True
@@ -101,7 +102,7 @@ class TestYoutubeSubtitles(BaseTestSubtitles):
         self.DL.params['subtitleslangs'] = langs
         subtitles = self.getSubtitles()
         for lang in langs:
-            self.assertTrue(subtitles.get(lang) is not None, u'Subtitles for \'%s\' not extracted' % lang)
+            self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)

 class TestDailymotionSubtitles(BaseTestSubtitles):
@@ -130,20 +131,20 @@ class TestDailymotionSubtitles(BaseTestSubtitles):
         self.assertEqual(len(subtitles.keys()), 5)

     def test_list_subtitles(self):
-        self.DL.expect_warning(u'Automatic Captions not supported by this server')
+        self.DL.expect_warning('Automatic Captions not supported by this server')
         self.DL.params['listsubtitles'] = True
         info_dict = self.getInfoDict()
         self.assertEqual(info_dict, None)

     def test_automatic_captions(self):
-        self.DL.expect_warning(u'Automatic Captions not supported by this server')
+        self.DL.expect_warning('Automatic Captions not supported by this server')
         self.DL.params['writeautomaticsub'] = True
         self.DL.params['subtitleslang'] = ['en']
         subtitles = self.getSubtitles()
         self.assertTrue(len(subtitles.keys()) == 0)

     def test_nosubtitles(self):
-        self.DL.expect_warning(u'video doesn\'t have subtitles')
+        self.DL.expect_warning('video doesn\'t have subtitles')
         self.url = 'http://www.dailymotion.com/video/x12u166_le-zapping-tele-star-du-08-aout-2013_tv'
         self.DL.params['writesubtitles'] = True
         self.DL.params['allsubtitles'] = True
@@ -156,7 +157,7 @@ class TestDailymotionSubtitles(BaseTestSubtitles):
         self.DL.params['subtitleslangs'] = langs
         subtitles = self.getSubtitles()
         for lang in langs:
-            self.assertTrue(subtitles.get(lang) is not None, u'Subtitles for \'%s\' not extracted' % lang)
+            self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)

 class TestTedSubtitles(BaseTestSubtitles):
@@ -185,13 +186,13 @@ class TestTedSubtitles(BaseTestSubtitles):
         self.assertTrue(len(subtitles.keys()) >= 28)

     def test_list_subtitles(self):
-        self.DL.expect_warning(u'Automatic Captions not supported by this server')
+        self.DL.expect_warning('Automatic Captions not supported by this server')
         self.DL.params['listsubtitles'] = True
         info_dict = self.getInfoDict()
         self.assertEqual(info_dict, None)

     def test_automatic_captions(self):
-        self.DL.expect_warning(u'Automatic Captions not supported by this server')
+        self.DL.expect_warning('Automatic Captions not supported by this server')
         self.DL.params['writeautomaticsub'] = True
         self.DL.params['subtitleslang'] = ['en']
         subtitles = self.getSubtitles()
@@ -203,7 +204,7 @@ class TestTedSubtitles(BaseTestSubtitles):
         self.DL.params['subtitleslangs'] = langs
         subtitles = self.getSubtitles()
         for lang in langs:
-            self.assertTrue(subtitles.get(lang) is not None, u'Subtitles for \'%s\' not extracted' % lang)
+            self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)

 class TestBlipTVSubtitles(BaseTestSubtitles):
@@ -211,13 +212,13 @@ class TestBlipTVSubtitles(BaseTestSubtitles):
     IE = BlipTVIE

     def test_list_subtitles(self):
-        self.DL.expect_warning(u'Automatic Captions not supported by this server')
+        self.DL.expect_warning('Automatic Captions not supported by this server')
         self.DL.params['listsubtitles'] = True
         info_dict = self.getInfoDict()
         self.assertEqual(info_dict, None)

     def test_allsubtitles(self):
-        self.DL.expect_warning(u'Automatic Captions not supported by this server')
+        self.DL.expect_warning('Automatic Captions not supported by this server')
         self.DL.params['writesubtitles'] = True
         self.DL.params['allsubtitles'] = True
         subtitles = self.getSubtitles()
@@ -251,20 +252,20 @@ class TestVimeoSubtitles(BaseTestSubtitles):
         self.assertEqual(set(subtitles.keys()), set(['de', 'en', 'es', 'fr']))

     def test_list_subtitles(self):
-        self.DL.expect_warning(u'Automatic Captions not supported by this server')
+        self.DL.expect_warning('Automatic Captions not supported by this server')
         self.DL.params['listsubtitles'] = True
         info_dict = self.getInfoDict()
         self.assertEqual(info_dict, None)

     def test_automatic_captions(self):
-        self.DL.expect_warning(u'Automatic Captions not supported by this server')
+        self.DL.expect_warning('Automatic Captions not supported by this server')
         self.DL.params['writeautomaticsub'] = True
         self.DL.params['subtitleslang'] = ['en']
         subtitles = self.getSubtitles()
         self.assertTrue(len(subtitles.keys()) == 0)

     def test_nosubtitles(self):
-        self.DL.expect_warning(u'video doesn\'t have subtitles')
+        self.DL.expect_warning('video doesn\'t have subtitles')
         self.url = 'http://vimeo.com/56015672'
         self.DL.params['writesubtitles'] = True
         self.DL.params['allsubtitles'] = True
@@ -277,7 +278,7 @@ class TestVimeoSubtitles(BaseTestSubtitles):
         self.DL.params['subtitleslangs'] = langs
         subtitles = self.getSubtitles()
         for lang in langs:
-            self.assertTrue(subtitles.get(lang) is not None, u'Subtitles for \'%s\' not extracted' % lang)
+            self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)

 class TestWallaSubtitles(BaseTestSubtitles):
@@ -285,13 +286,13 @@ class TestWallaSubtitles(BaseTestSubtitles):
     IE = WallaIE

     def test_list_subtitles(self):
-        self.DL.expect_warning(u'Automatic Captions not supported by this server')
+        self.DL.expect_warning('Automatic Captions not supported by this server')
         self.DL.params['listsubtitles'] = True
         info_dict = self.getInfoDict()
         self.assertEqual(info_dict, None)

     def test_allsubtitles(self):
-        self.DL.expect_warning(u'Automatic Captions not supported by this server')
+        self.DL.expect_warning('Automatic Captions not supported by this server')
         self.DL.params['writesubtitles'] = True
         self.DL.params['allsubtitles'] = True
         subtitles = self.getSubtitles()
@@ -299,7 +300,7 @@ class TestWallaSubtitles(BaseTestSubtitles):
         self.assertEqual(md5(subtitles['heb']), 'e758c5d7cb982f6bef14f377ec7a3920')

     def test_nosubtitles(self):
-        self.DL.expect_warning(u'video doesn\'t have subtitles')
+        self.DL.expect_warning('video doesn\'t have subtitles')
         self.url = 'http://vod.walla.co.il/movie/2642630/one-direction-all-for-one'
         self.DL.params['writesubtitles'] = True
         self.DL.params['allsubtitles'] = True


@@ -1,4 +1,5 @@
 #!/usr/bin/env python
+from __future__ import unicode_literals
 # Allow direct execution
 import os
@@ -37,7 +38,9 @@ def _make_testfunc(testfile):
                 or os.path.getmtime(swf_file) < os.path.getmtime(as_file)):
             # Recompile
             try:
-                subprocess.check_call(['mxmlc', '-output', swf_file, as_file])
+                subprocess.check_call([
+                    'mxmlc', '-output', swf_file,
+                    '-static-link-runtime-shared-libraries', as_file])
             except OSError as ose:
                 if ose.errno == errno.ENOENT:
                     print('mxmlc not found! Skipping test.')


@@ -16,11 +16,11 @@ import json
 import xml.etree.ElementTree
 from youtube_dl.utils import (
+    clean_html,
     DateRange,
     encodeFilename,
     find_xpath_attr,
     fix_xml_ampersands,
-    get_meta_content,
     orderedSet,
     OnDemandPagedList,
     InAdvancePagedList,
@@ -46,8 +46,7 @@ from youtube_dl.utils import (
     escape_url,
     js_to_json,
     get_filesystem_encoding,
-    compat_getenv,
-    compat_expanduser,
+    intlist_to_bytes,
 )
@@ -157,17 +156,6 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(find_xpath_attr(doc, './/node', 'x', 'a'), doc[1])
         self.assertEqual(find_xpath_attr(doc, './/node', 'y', 'c'), doc[2])
-    def test_meta_parser(self):
-        testhtml = '''
-        <head>
-            <meta name="description" content="foo &amp; bar">
-            <meta content='Plato' name='author'/>
-        </head>
-        '''
-        get_meta = lambda name: get_meta_content(name, testhtml)
-        self.assertEqual(get_meta('description'), 'foo & bar')
-        self.assertEqual(get_meta('author'), 'Plato')
     def test_xpath_with_ns(self):
         testxml = '''<root xmlns:media="http://example.com/">
             <media:song>
@@ -230,6 +218,7 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(parse_duration('0m0s'), 0)
         self.assertEqual(parse_duration('0s'), 0)
         self.assertEqual(parse_duration('01:02:03.05'), 3723.05)
+        self.assertEqual(parse_duration('T30M38S'), 1838)
     def test_fix_xml_ampersands(self):
         self.assertEqual(
@@ -289,12 +278,17 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(parse_iso8601('2014-03-23T23:04:26+0100'), 1395612266)
         self.assertEqual(parse_iso8601('2014-03-23T22:04:26+0000'), 1395612266)
         self.assertEqual(parse_iso8601('2014-03-23T22:04:26Z'), 1395612266)
+        self.assertEqual(parse_iso8601('2014-03-23T22:04:26.1234Z'), 1395612266)
     def test_strip_jsonp(self):
         stripped = strip_jsonp('cb ([ {"id":"532cb",\n\n\n"x":\n3}\n]\n);')
         d = json.loads(stripped)
         self.assertEqual(d, [{"id": "532cb", "x": 3}])
+        stripped = strip_jsonp('parseMetadata({"STATUS":"OK"})\n\n\n//epc')
+        d = json.loads(stripped)
+        self.assertEqual(d, {'STATUS': 'OK'})
     def test_uppercase_escape(self):
         self.assertEqual(uppercase_escape(''), '')
         self.assertEqual(uppercase_escape('\\U0001d550'), '𝕐')
@@ -358,15 +352,14 @@ class TestUtil(unittest.TestCase):
         on = js_to_json('{"abc": true}')
         self.assertEqual(json.loads(on), {'abc': True})
-    def test_compat_getenv(self):
-        test_str = 'тест'
-        os.environ['YOUTUBE-DL-TEST'] = test_str.encode(get_filesystem_encoding())
-        self.assertEqual(compat_getenv('YOUTUBE-DL-TEST'), test_str)
-    def test_compat_expanduser(self):
-        test_str = 'C:\Documents and Settings\тест\Application Data'
-        os.environ['HOME'] = test_str.encode(get_filesystem_encoding())
-        self.assertEqual(compat_expanduser('~'), test_str)
+    def test_clean_html(self):
+        self.assertEqual(clean_html('a:\nb'), 'a: b')
+        self.assertEqual(clean_html('a:\n "b"'), 'a: "b"')
+    def test_intlist_to_bytes(self):
+        self.assertEqual(
+            intlist_to_bytes([0, 1, 127, 128, 255]),
+            b'\x00\x01\x7f\x80\xff')
 if __name__ == '__main__':
     unittest.main()


@@ -14,7 +14,7 @@ import re
 import string
 from youtube_dl.extractor import YoutubeIE
-from youtube_dl.utils import compat_str, compat_urlretrieve
+from youtube_dl.compat import compat_str, compat_urlretrieve
 _TESTS = [
     (


@@ -22,13 +22,15 @@ import traceback
 if os.name == 'nt':
     import ctypes
-from .utils import (
+from .compat import (
     compat_cookiejar,
     compat_expanduser,
     compat_http_client,
     compat_str,
     compat_urllib_error,
     compat_urllib_request,
+)
+from .utils import (
     escape_url,
     ContentTooShortError,
     date_from_str,
@@ -62,6 +64,7 @@ from .utils import (
 from .cache import Cache
 from .extractor import get_info_extractor, gen_extractors
 from .downloader import get_suitable_downloader
+from .downloader.rtmp import rtmpdump_version
 from .postprocessor import FFmpegMergerPP, FFmpegPostProcessor
 from .version import __version__
@@ -189,7 +192,7 @@ class YoutubeDL(object):
     _num_downloads = None
     _screen_file = None
-    def __init__(self, params=None):
+    def __init__(self, params=None, auto_init=True):
         """Create a FileDownloader object with the given options."""
         if params is None:
             params = {}
@@ -246,6 +249,10 @@ class YoutubeDL(object):
         self._setup_opener()
+        if auto_init:
+            self.print_debug_header()
+            self.add_default_info_extractors()
     def add_info_extractor(self, ie):
         """Add an InfoExtractor object to the end of the list."""
         self._ies.append(ie)
@@ -651,6 +658,8 @@ class YoutubeDL(object):
                 extra = {
                     'n_entries': n_entries,
                     'playlist': playlist,
+                    'playlist_id': ie_result.get('id'),
+                    'playlist_title': ie_result.get('title'),
                     'playlist_index': i + playliststart,
                     'extractor': ie_result['extractor'],
                     'webpage_url': ie_result['webpage_url'],
@@ -829,6 +838,13 @@ class YoutubeDL(object):
                     formats_info = (self.select_format(format_1, formats),
                                     self.select_format(format_2, formats))
                     if all(formats_info):
+                        # The first format must contain the video and the
+                        # second the audio
+                        if formats_info[0].get('vcodec') == 'none':
+                            self.report_error('The first format must '
+                                'contain the video, try using '
+                                '"-f %s+%s"' % (format_2, format_1))
+                            return
                         selected_format = {
                             'requested_formats': formats_info,
                             'format': rf,
@@ -985,7 +1001,7 @@ class YoutubeDL(object):
             else:
                 self.to_screen('[info] Writing video description metadata as JSON to: ' + infofn)
                 try:
-                    write_json_file(info_dict, encodeFilename(infofn))
+                    write_json_file(info_dict, infofn)
                 except (OSError, IOError):
                     self.report_error('Cannot write metadata to JSON file ' + infofn)
                     return
@@ -1207,6 +1223,8 @@ class YoutubeDL(object):
                 res += 'video@'
             if fdict.get('vbr') is not None:
                 res += '%4dk' % fdict['vbr']
+            if fdict.get('fps') is not None:
+                res += ', %sfps' % fdict['fps']
             if fdict.get('acodec') is not None:
                 if res:
                     res += ', '
@@ -1288,11 +1306,13 @@ class YoutubeDL(object):
             self.report_warning(
                 'Your Python is broken! Update to a newer and supported version')
+        stdout_encoding = getattr(
+            sys.stdout, 'encoding', 'missing (%s)' % type(sys.stdout).__name__)
         encoding_str = (
             '[debug] Encodings: locale %s, fs %s, out %s, pref %s\n' % (
                 locale.getpreferredencoding(),
                 sys.getfilesystemencoding(),
-                sys.stdout.encoding,
+                stdout_encoding,
                 self.get_encoding()))
         write_string(encoding_str, encoding=None)
@@ -1315,6 +1335,7 @@ class YoutubeDL(object):
             platform.python_version(), platform_name()))
         exe_versions = FFmpegPostProcessor.get_versions()
+        exe_versions['rtmpdump'] = rtmpdump_version()
         exe_str = ', '.join(
             '%s %s' % (exe, v)
             for exe, v in sorted(exe_versions.items())
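The new auto_init parameter changes the embedding story: the constructor now registers the default info extractors and prints the debug header itself. A minimal embedding sketch under that assumption (the option dict and URL are illustrative only):

from youtube_dl import YoutubeDL

# Sketch only: 'quiet' and the URL are illustrative. With the default
# auto_init=True the constructor already calls print_debug_header() and
# add_default_info_extractors(); pass auto_init=False to defer both.
with YoutubeDL({'quiet': True}) as ydl:
    ydl.download(['http://example.com/some-video'])  # hypothetical URL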


@@ -1,90 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
-__authors__ = (
-    'Ricardo Garcia Gonzalez',
-    'Danny Colligan',
-    'Benjamin Johnson',
-    'Vasyl\' Vavrychuk',
-    'Witold Baryluk',
-    'Paweł Paprota',
-    'Gergely Imreh',
-    'Rogério Brito',
-    'Philipp Hagemeister',
-    'Sören Schulze',
-    'Kevin Ngo',
-    'Ori Avtalion',
-    'shizeeg',
-    'Filippo Valsorda',
-    'Christian Albrecht',
-    'Dave Vasilevsky',
-    'Jaime Marquínez Ferrándiz',
-    'Jeff Crouse',
-    'Osama Khalid',
-    'Michael Walter',
-    'M. Yasoob Ullah Khalid',
-    'Julien Fraichard',
-    'Johny Mo Swag',
-    'Axel Noack',
-    'Albert Kim',
-    'Pierre Rudloff',
-    'Huarong Huo',
-    'Ismael Mejía',
-    'Steffan \'Ruirize\' James',
-    'Andras Elso',
-    'Jelle van der Waa',
-    'Marcin Cieślak',
-    'Anton Larionov',
-    'Takuya Tsuchida',
-    'Sergey M.',
-    'Michael Orlitzky',
-    'Chris Gahan',
-    'Saimadhav Heblikar',
-    'Mike Col',
-    'Oleg Prutz',
-    'pulpe',
-    'Andreas Schmitz',
-    'Michael Kaiser',
-    'Niklas Laxström',
-    'David Triendl',
-    'Anthony Weems',
-    'David Wagner',
-    'Juan C. Olivares',
-    'Mattias Harrysson',
-    'phaer',
-    'Sainyam Kapoor',
-    'Nicolas Évrard',
-    'Jason Normore',
-    'Hoje Lee',
-    'Adam Thalhammer',
-    'Georg Jähnig',
-    'Ralf Haring',
-    'Koki Takahashi',
-    'Ariset Llerena',
-    'Adam Malcontenti-Wilson',
-    'Tobias Bell',
-    'Naglis Jonaitis',
-    'Charles Chen',
-    'Hassaan Ali',
-    'Dobrosław Żybort',
-    'David Fabijan',
-    'Sebastian Haas',
-    'Alexander Kirk',
-    'Erik Johnson',
-    'Keith Beckman',
-    'Ole Ernst',
-    'Aaron McDaniel (mcd1992)',
-    'Magnus Kolstad',
-    'Hari Padmanaban',
-    'Carlos Ramos',
-    '5moufl',
-    'lenaten',
-    'Dennis Scheiba',
-    'Damon Timm',
-    'winwon',
-    'Xavier Beynon',
-    'Gabriel Schubiner',
-)
+from __future__ import unicode_literals
 __license__ = 'Public Domain'
@@ -98,10 +15,13 @@ import sys
 from .options import (
     parseOpts,
 )
-from .utils import (
+from .compat import (
     compat_expanduser,
     compat_getpass,
     compat_print,
+    workaround_optparse_bug9161,
+)
+from .utils import (
     DateRange,
     DEFAULT_OUTTMPL,
     decodeOption,
@@ -138,7 +58,9 @@ def _real_main(argv=None):
     # https://github.com/rg3/youtube-dl/issues/820
     codecs.register(lambda name: codecs.lookup('utf-8') if name == 'cp65001' else None)
-    setproctitle(u'youtube-dl')
+    workaround_optparse_bug9161()
+    setproctitle('youtube-dl')
     parser, opts, args = parseOpts(argv)
@@ -154,10 +76,10 @@ def _real_main(argv=None):
     if opts.headers is not None:
         for h in opts.headers:
             if h.find(':', 1) < 0:
-                parser.error(u'wrong header formatting, it should be key:value, not "%s"'%h)
+                parser.error('wrong header formatting, it should be key:value, not "%s"'%h)
             key, value = h.split(':', 2)
             if opts.verbose:
-                write_string(u'[debug] Adding header from command line option %s:%s\n'%(key, value))
+                write_string('[debug] Adding header from command line option %s:%s\n'%(key, value))
             std_headers[key] = value
     # Dump user agent
@@ -175,9 +97,9 @@ def _real_main(argv=None):
             batchfd = io.open(opts.batchfile, 'r', encoding='utf-8', errors='ignore')
             batch_urls = read_batch_urls(batchfd)
             if opts.verbose:
-                write_string(u'[debug] Batch file urls: ' + repr(batch_urls) + u'\n')
+                write_string('[debug] Batch file urls: ' + repr(batch_urls) + '\n')
         except IOError:
-            sys.exit(u'ERROR: batch file could not be read')
+            sys.exit('ERROR: batch file could not be read')
     all_urls = batch_urls + args
     all_urls = [url.strip() for url in all_urls]
     _enc = preferredencoding()
@@ -190,7 +112,7 @@ def _real_main(argv=None):
             compat_print(ie.IE_NAME + (' (CURRENTLY BROKEN)' if not ie._WORKING else ''))
             matchedUrls = [url for url in all_urls if ie.suitable(url)]
             for mu in matchedUrls:
-                compat_print(u' ' + mu)
+                compat_print(' ' + mu)
         sys.exit(0)
     if opts.list_extractor_descriptions:
         for ie in sorted(extractors, key=lambda ie: ie.IE_NAME.lower()):
@@ -200,63 +122,63 @@ def _real_main(argv=None):
             if desc is False:
                 continue
             if hasattr(ie, 'SEARCH_KEY'):
-                _SEARCHES = (u'cute kittens', u'slithering pythons', u'falling cat', u'angry poodle', u'purple fish', u'running tortoise', u'sleeping bunny')
-                _COUNTS = (u'', u'5', u'10', u'all')
-                desc += u' (Example: "%s%s:%s" )' % (ie.SEARCH_KEY, random.choice(_COUNTS), random.choice(_SEARCHES))
+                _SEARCHES = ('cute kittens', 'slithering pythons', 'falling cat', 'angry poodle', 'purple fish', 'running tortoise', 'sleeping bunny')
+                _COUNTS = ('', '5', '10', 'all')
+                desc += ' (Example: "%s%s:%s" )' % (ie.SEARCH_KEY, random.choice(_COUNTS), random.choice(_SEARCHES))
             compat_print(desc)
         sys.exit(0)
     # Conflicting, missing and erroneous options
     if opts.usenetrc and (opts.username is not None or opts.password is not None):
-        parser.error(u'using .netrc conflicts with giving username/password')
+        parser.error('using .netrc conflicts with giving username/password')
     if opts.password is not None and opts.username is None:
-        parser.error(u'account username missing\n')
+        parser.error('account username missing\n')
     if opts.outtmpl is not None and (opts.usetitle or opts.autonumber or opts.useid):
-        parser.error(u'using output template conflicts with using title, video ID or auto number')
+        parser.error('using output template conflicts with using title, video ID or auto number')
     if opts.usetitle and opts.useid:
-        parser.error(u'using title conflicts with using video ID')
+        parser.error('using title conflicts with using video ID')
     if opts.username is not None and opts.password is None:
-        opts.password = compat_getpass(u'Type account password and press [Return]: ')
+        opts.password = compat_getpass('Type account password and press [Return]: ')
     if opts.ratelimit is not None:
         numeric_limit = FileDownloader.parse_bytes(opts.ratelimit)
         if numeric_limit is None:
-            parser.error(u'invalid rate limit specified')
+            parser.error('invalid rate limit specified')
         opts.ratelimit = numeric_limit
     if opts.min_filesize is not None:
         numeric_limit = FileDownloader.parse_bytes(opts.min_filesize)
         if numeric_limit is None:
-            parser.error(u'invalid min_filesize specified')
+            parser.error('invalid min_filesize specified')
         opts.min_filesize = numeric_limit
     if opts.max_filesize is not None:
         numeric_limit = FileDownloader.parse_bytes(opts.max_filesize)
         if numeric_limit is None:
-            parser.error(u'invalid max_filesize specified')
+            parser.error('invalid max_filesize specified')
         opts.max_filesize = numeric_limit
     if opts.retries is not None:
         try:
             opts.retries = int(opts.retries)
         except (TypeError, ValueError):
-            parser.error(u'invalid retry count specified')
+            parser.error('invalid retry count specified')
     if opts.buffersize is not None:
         numeric_buffersize = FileDownloader.parse_bytes(opts.buffersize)
         if numeric_buffersize is None:
-            parser.error(u'invalid buffer size specified')
+            parser.error('invalid buffer size specified')
         opts.buffersize = numeric_buffersize
     if opts.playliststart <= 0:
-        raise ValueError(u'Playlist start must be positive')
+        raise ValueError('Playlist start must be positive')
     if opts.playlistend not in (-1, None) and opts.playlistend < opts.playliststart:
-        raise ValueError(u'Playlist end must be greater than playlist start')
+        raise ValueError('Playlist end must be greater than playlist start')
     if opts.extractaudio:
         if opts.audioformat not in ['best', 'aac', 'mp3', 'm4a', 'opus', 'vorbis', 'wav']:
-            parser.error(u'invalid audio format specified')
+            parser.error('invalid audio format specified')
     if opts.audioquality:
         opts.audioquality = opts.audioquality.strip('k').strip('K')
         if not opts.audioquality.isdigit():
-            parser.error(u'invalid audio quality specified')
+            parser.error('invalid audio quality specified')
     if opts.recodevideo is not None:
         if opts.recodevideo not in ['mp4', 'flv', 'webm', 'ogg', 'mkv']:
-            parser.error(u'invalid video recode format specified')
+            parser.error('invalid video recode format specified')
     if opts.date is not None:
         date = DateRange.day(opts.date)
     else:
@@ -276,17 +198,17 @@ def _real_main(argv=None):
     if opts.outtmpl is not None:
         opts.outtmpl = opts.outtmpl.decode(preferredencoding())
     outtmpl =((opts.outtmpl is not None and opts.outtmpl)
-            or (opts.format == '-1' and opts.usetitle and u'%(title)s-%(id)s-%(format)s.%(ext)s')
-            or (opts.format == '-1' and u'%(id)s-%(format)s.%(ext)s')
-            or (opts.usetitle and opts.autonumber and u'%(autonumber)s-%(title)s-%(id)s.%(ext)s')
-            or (opts.usetitle and u'%(title)s-%(id)s.%(ext)s')
-            or (opts.useid and u'%(id)s.%(ext)s')
-            or (opts.autonumber and u'%(autonumber)s-%(id)s.%(ext)s')
+            or (opts.format == '-1' and opts.usetitle and '%(title)s-%(id)s-%(format)s.%(ext)s')
+            or (opts.format == '-1' and '%(id)s-%(format)s.%(ext)s')
+            or (opts.usetitle and opts.autonumber and '%(autonumber)s-%(title)s-%(id)s.%(ext)s')
+            or (opts.usetitle and '%(title)s-%(id)s.%(ext)s')
+            or (opts.useid and '%(id)s.%(ext)s')
+            or (opts.autonumber and '%(autonumber)s-%(id)s.%(ext)s')
            or DEFAULT_OUTTMPL)
     if not os.path.splitext(outtmpl)[1] and opts.extractaudio:
-        parser.error(u'Cannot download a video and extract audio into the same'
-                     u' file! Use "{0}.%(ext)s" instead of "{0}" as the output'
-                     u' template'.format(outtmpl))
+        parser.error('Cannot download a video and extract audio into the same'
+                     ' file! Use "{0}.%(ext)s" instead of "{0}" as the output'
+                     ' template'.format(outtmpl))
     any_printing = opts.geturl or opts.gettitle or opts.getid or opts.getthumbnail or opts.getdescription or opts.getfilename or opts.getformat or opts.getduration or opts.dumpjson or opts.dump_single_json
     download_archive_fn = compat_expanduser(opts.download_archive) if opts.download_archive is not None else opts.download_archive
@@ -378,9 +300,6 @@ def _real_main(argv=None):
     }
     with YoutubeDL(ydl_opts) as ydl:
-        ydl.print_debug_header()
-        ydl.add_default_info_extractors()
         # PostProcessors
         # Add the metadata pp first, the other pps will copy it
         if opts.addmetadata:
@@ -416,7 +335,7 @@ def _real_main(argv=None):
     # Maybe do nothing
     if (len(all_urls) < 1) and (opts.load_info_filename is None):
         if not (opts.update_self or opts.rm_cachedir):
-            parser.error(u'you must provide at least one URL')
+            parser.error('you must provide at least one URL')
         else:
             sys.exit()
@@ -426,7 +345,7 @@ def _real_main(argv=None):
         else:
             retcode = ydl.download(all_urls)
     except MaxDownloadsReached:
-        ydl.to_screen(u'--max-download limit reached, aborting.')
+        ydl.to_screen('--max-download limit reached, aborting.')
         retcode = 101
     sys.exit(retcode)
@@ -438,6 +357,6 @@ def main(argv=None):
     except DownloadError:
         sys.exit(1)
     except SameFileError:
-        sys.exit(u'ERROR: fixed output name but more than one file to download')
+        sys.exit('ERROR: fixed output name but more than one file to download')
     except KeyboardInterrupt:
-        sys.exit(u'\nERROR: Interrupted by user')
+        sys.exit('\nERROR: Interrupted by user')
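The outtmpl expression above relies on the `cond and value or fallback` idiom: each parenthesised term yields its template only when its flag is set, and `or` picks the first truthy one. A stripped-down sketch of the same selection logic (pick_outtmpl is a made-up helper covering only a few of the flags):

DEFAULT_OUTTMPL = '%(title)s-%(id)s.%(ext)s'  # mirrors youtube_dl.utils

def pick_outtmpl(outtmpl=None, usetitle=False, useid=False):
    # First truthy term wins; an explicit template short-circuits the rest.
    return (outtmpl
            or (usetitle and '%(title)s-%(id)s.%(ext)s')
            or (useid and '%(id)s.%(ext)s')
            or DEFAULT_OUTTMPL)

assert pick_outtmpl(useid=True) == '%(id)s.%(ext)s'
assert pick_outtmpl() == DEFAULT_OUTTMPL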


@@ -8,10 +8,8 @@ import re
 import shutil
 import traceback
-from .utils import (
-    compat_expanduser,
-    write_json_file,
-)
+from .compat import compat_expanduser, compat_getenv
+from .utils import write_json_file
 class Cache(object):
@@ -21,7 +19,7 @@ class Cache(object):
     def _get_root_dir(self):
         res = self._ydl.params.get('cachedir')
         if res is None:
-            cache_root = os.environ.get('XDG_CACHE_HOME', '~/.cache')
+            cache_root = compat_getenv('XDG_CACHE_HOME', '~/.cache')
             res = os.path.join(cache_root, 'youtube-dl')
         return compat_expanduser(res)
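The net effect is the XDG convention: an explicit cachedir option wins, then $XDG_CACHE_HOME/youtube-dl, then ~/.cache/youtube-dl. A rough sketch of that resolution order, using plain os helpers instead of the unicode-safe compat wrappers:

import os

def cache_root_dir(cachedir_option=None):
    # Sketch of _get_root_dir: option > $XDG_CACHE_HOME > '~/.cache',
    # expanded last so a custom expanduser can be swapped in on Python 2.
    res = cachedir_option
    if res is None:
        cache_root = os.environ.get('XDG_CACHE_HOME', '~/.cache')
        res = os.path.join(cache_root, 'youtube-dl')
    return os.path.expanduser(res)

print(cache_root_dir())  # e.g. /home/user/.cache/youtube-dl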

youtube_dl/compat.py (new file, 350 lines)

@@ -0,0 +1,350 @@
from __future__ import unicode_literals

import getpass
import optparse
import os
import subprocess
import sys

try:
    import urllib.request as compat_urllib_request
except ImportError:  # Python 2
    import urllib2 as compat_urllib_request

try:
    import urllib.error as compat_urllib_error
except ImportError:  # Python 2
    import urllib2 as compat_urllib_error

try:
    import urllib.parse as compat_urllib_parse
except ImportError:  # Python 2
    import urllib as compat_urllib_parse

try:
    from urllib.parse import urlparse as compat_urllib_parse_urlparse
except ImportError:  # Python 2
    from urlparse import urlparse as compat_urllib_parse_urlparse

try:
    import urllib.parse as compat_urlparse
except ImportError:  # Python 2
    import urlparse as compat_urlparse

try:
    import http.cookiejar as compat_cookiejar
except ImportError:  # Python 2
    import cookielib as compat_cookiejar

try:
    import html.entities as compat_html_entities
except ImportError:  # Python 2
    import htmlentitydefs as compat_html_entities

try:
    import html.parser as compat_html_parser
except ImportError:  # Python 2
    import HTMLParser as compat_html_parser

try:
    import http.client as compat_http_client
except ImportError:  # Python 2
    import httplib as compat_http_client

try:
    from urllib.error import HTTPError as compat_HTTPError
except ImportError:  # Python 2
    from urllib2 import HTTPError as compat_HTTPError

try:
    from urllib.request import urlretrieve as compat_urlretrieve
except ImportError:  # Python 2
    from urllib import urlretrieve as compat_urlretrieve

try:
    from subprocess import DEVNULL
    compat_subprocess_get_DEVNULL = lambda: DEVNULL
except ImportError:
    compat_subprocess_get_DEVNULL = lambda: open(os.path.devnull, 'w')

try:
    from urllib.parse import unquote as compat_urllib_parse_unquote
except ImportError:
    def compat_urllib_parse_unquote(string, encoding='utf-8', errors='replace'):
        if string == '':
            return string
        res = string.split('%')
        if len(res) == 1:
            return string
        if encoding is None:
            encoding = 'utf-8'
        if errors is None:
            errors = 'replace'
        # pct_sequence: contiguous sequence of percent-encoded bytes, decoded
        pct_sequence = b''
        string = res[0]
        for item in res[1:]:
            try:
                if not item:
                    raise ValueError
                pct_sequence += item[:2].decode('hex')
                rest = item[2:]
                if not rest:
                    # This segment was just a single percent-encoded character.
                    # May be part of a sequence of code units, so delay decoding.
                    # (Stored in pct_sequence).
                    continue
            except ValueError:
                rest = '%' + item
            # Encountered non-percent-encoded characters. Flush the current
            # pct_sequence.
            string += pct_sequence.decode(encoding, errors) + rest
            pct_sequence = b''
        if pct_sequence:
            # Flush the final pct_sequence
            string += pct_sequence.decode(encoding, errors)
        return string

try:
    from urllib.parse import parse_qs as compat_parse_qs
except ImportError:  # Python 2
    # HACK: The following is the correct parse_qs implementation from cpython 3's stdlib.
    # Python 2's version is apparently totally broken
    def _parse_qsl(qs, keep_blank_values=False, strict_parsing=False,
                   encoding='utf-8', errors='replace'):
        qs, _coerce_result = qs, unicode
        pairs = [s2 for s1 in qs.split('&') for s2 in s1.split(';')]
        r = []
        for name_value in pairs:
            if not name_value and not strict_parsing:
                continue
            nv = name_value.split('=', 1)
            if len(nv) != 2:
                if strict_parsing:
                    raise ValueError("bad query field: %r" % (name_value,))
                # Handle case of a control-name with no equal sign
                if keep_blank_values:
                    nv.append('')
                else:
                    continue
            if len(nv[1]) or keep_blank_values:
                name = nv[0].replace('+', ' ')
                name = compat_urllib_parse_unquote(
                    name, encoding=encoding, errors=errors)
                name = _coerce_result(name)
                value = nv[1].replace('+', ' ')
                value = compat_urllib_parse_unquote(
                    value, encoding=encoding, errors=errors)
                value = _coerce_result(value)
                r.append((name, value))
        return r

    def compat_parse_qs(qs, keep_blank_values=False, strict_parsing=False,
                        encoding='utf-8', errors='replace'):
        parsed_result = {}
        pairs = _parse_qsl(qs, keep_blank_values, strict_parsing,
                           encoding=encoding, errors=errors)
        for name, value in pairs:
            if name in parsed_result:
                parsed_result[name].append(value)
            else:
                parsed_result[name] = [value]
        return parsed_result

try:
    compat_str = unicode  # Python 2
except NameError:
    compat_str = str

try:
    compat_chr = unichr  # Python 2
except NameError:
    compat_chr = chr

try:
    from xml.etree.ElementTree import ParseError as compat_xml_parse_error
except ImportError:  # Python 2.6
    from xml.parsers.expat import ExpatError as compat_xml_parse_error

try:
    from shlex import quote as shlex_quote
except ImportError:  # Python < 3.3
    def shlex_quote(s):
        return "'" + s.replace("'", "'\"'\"'") + "'"

def compat_ord(c):
    if type(c) is int: return c
    else: return ord(c)

if sys.version_info >= (3, 0):
    compat_getenv = os.getenv
    compat_expanduser = os.path.expanduser
else:
    # Environment variables should be decoded with filesystem encoding.
    # Otherwise it will fail if any non-ASCII characters present (see #3854 #3217 #2918)
    def compat_getenv(key, default=None):
        from .utils import get_filesystem_encoding
        env = os.getenv(key, default)
        if env:
            env = env.decode(get_filesystem_encoding())
        return env

    # HACK: The default implementations of os.path.expanduser from cpython do not decode
    # environment variables with filesystem encoding. We will work around this by
    # providing adjusted implementations.
    # The following are os.path.expanduser implementations from cpython 2.7.8 stdlib
    # for different platforms with correct environment variables decoding.

    if os.name == 'posix':
        def compat_expanduser(path):
            """Expand ~ and ~user constructions.  If user or $HOME is unknown,
            do nothing."""
            if not path.startswith('~'):
                return path
            i = path.find('/', 1)
            if i < 0:
                i = len(path)
            if i == 1:
                if 'HOME' not in os.environ:
                    import pwd
                    userhome = pwd.getpwuid(os.getuid()).pw_dir
                else:
                    userhome = compat_getenv('HOME')
            else:
                import pwd
                try:
                    pwent = pwd.getpwnam(path[1:i])
                except KeyError:
                    return path
                userhome = pwent.pw_dir
            userhome = userhome.rstrip('/')
            return (userhome + path[i:]) or '/'
    elif os.name == 'nt' or os.name == 'ce':
        def compat_expanduser(path):
            """Expand ~ and ~user constructs.

            If user or $HOME is unknown, do nothing."""
            if path[:1] != '~':
                return path
            i, n = 1, len(path)
            while i < n and path[i] not in '/\\':
                i = i + 1

            if 'HOME' in os.environ:
                userhome = compat_getenv('HOME')
            elif 'USERPROFILE' in os.environ:
                userhome = compat_getenv('USERPROFILE')
            elif not 'HOMEPATH' in os.environ:
                return path
            else:
                try:
                    drive = compat_getenv('HOMEDRIVE')
                except KeyError:
                    drive = ''
                userhome = os.path.join(drive, compat_getenv('HOMEPATH'))

            if i != 1:  # ~user
                userhome = os.path.join(os.path.dirname(userhome), path[1:i])

            return userhome + path[i:]
    else:
        compat_expanduser = os.path.expanduser

if sys.version_info < (3, 0):
    def compat_print(s):
        from .utils import preferredencoding
        print(s.encode(preferredencoding(), 'xmlcharrefreplace'))
else:
    def compat_print(s):
        assert type(s) == type(u'')
        print(s)

try:
    subprocess_check_output = subprocess.check_output
except AttributeError:
    def subprocess_check_output(*args, **kwargs):
        assert 'input' not in kwargs
        p = subprocess.Popen(*args, stdout=subprocess.PIPE, **kwargs)
        output, _ = p.communicate()
        ret = p.poll()
        if ret:
            raise subprocess.CalledProcessError(ret, p.args, output=output)
        return output

if sys.version_info < (3, 0) and sys.platform == 'win32':
    def compat_getpass(prompt, *args, **kwargs):
        if isinstance(prompt, compat_str):
            from .utils import preferredencoding
            prompt = prompt.encode(preferredencoding())
        return getpass.getpass(prompt, *args, **kwargs)
else:
    compat_getpass = getpass.getpass

# Old 2.6 and 2.7 releases require kwargs to be bytes
try:
    (lambda x: x)(**{'x': 0})
except TypeError:
    def compat_kwargs(kwargs):
        return dict((bytes(k), v) for k, v in kwargs.items())
else:
    compat_kwargs = lambda kwargs: kwargs

# Fix https://github.com/rg3/youtube-dl/issues/4223
# See http://bugs.python.org/issue9161 for what is broken
def workaround_optparse_bug9161():
    op = optparse.OptionParser()
    og = optparse.OptionGroup(op, 'foo')
    try:
        og.add_option('-t')
    except TypeError as te:
        real_add_option = optparse.OptionGroup.add_option

        def _compat_add_option(self, *args, **kwargs):
            enc = lambda v: (
                v.encode('ascii', 'replace') if isinstance(v, compat_str)
                else v)
            bargs = [enc(a) for a in args]
            bkwargs = dict(
                (k, enc(v)) for k, v in kwargs.items())
            return real_add_option(self, *bargs, **bkwargs)
        optparse.OptionGroup.add_option = _compat_add_option

__all__ = [
    'compat_HTTPError',
    'compat_chr',
    'compat_cookiejar',
    'compat_expanduser',
    'compat_getenv',
    'compat_getpass',
    'compat_html_entities',
    'compat_html_parser',
    'compat_http_client',
    'compat_kwargs',
    'compat_ord',
    'compat_parse_qs',
    'compat_print',
    'compat_str',
    'compat_subprocess_get_DEVNULL',
    'compat_urllib_error',
    'compat_urllib_parse',
    'compat_urllib_parse_unquote',
    'compat_urllib_parse_urlparse',
    'compat_urllib_request',
    'compat_urlparse',
    'compat_urlretrieve',
    'compat_xml_parse_error',
    'shlex_quote',
    'subprocess_check_output',
    'workaround_optparse_bug9161',
]
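To see what compat_getenv buys over plain os.getenv, a small usage sketch (the variable names are made up; on Python 2 the raw value would come back as undecoded filesystem-encoded bytes):

from __future__ import unicode_literals

import os

from youtube_dl.compat import compat_getenv

# 'YDL_DEMO' is a hypothetical variable; compat_getenv always returns
# text, decoding with the filesystem encoding on Python 2 where
# os.getenv would return raw bytes.
os.environ['YDL_DEMO'] = 'demo-value'
assert compat_getenv('YDL_DEMO') == 'demo-value'
assert compat_getenv('YDL_MISSING', 'fallback') == 'fallback'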


@@ -1,3 +1,5 @@
+from __future__ import unicode_literals
 import os
 import re
 import sys
@@ -159,14 +161,14 @@ class FileDownloader(object):
     def temp_name(self, filename):
         """Returns a temporary filename for the given filename."""
-        if self.params.get('nopart', False) or filename == u'-' or \
+        if self.params.get('nopart', False) or filename == '-' or \
                 (os.path.exists(encodeFilename(filename)) and not os.path.isfile(encodeFilename(filename))):
             return filename
-        return filename + u'.part'
+        return filename + '.part'
     def undo_temp_name(self, filename):
-        if filename.endswith(u'.part'):
-            return filename[:-len(u'.part')]
+        if filename.endswith('.part'):
+            return filename[:-len('.part')]
         return filename
     def try_rename(self, old_filename, new_filename):
@@ -175,7 +177,7 @@ class FileDownloader(object):
                 return
             os.rename(encodeFilename(old_filename), encodeFilename(new_filename))
         except (IOError, OSError) as err:
-            self.report_error(u'unable to rename file: %s' % compat_str(err))
+            self.report_error('unable to rename file: %s' % compat_str(err))
     def try_utime(self, filename, last_modified_hdr):
         """Try to set the last-modified time of the given file."""
@@ -200,10 +202,10 @@ class FileDownloader(object):
     def report_destination(self, filename):
         """Report destination filename."""
-        self.to_screen(u'[download] Destination: ' + filename)
+        self.to_screen('[download] Destination: ' + filename)
     def _report_progress_status(self, msg, is_last_line=False):
-        fullmsg = u'[download] ' + msg
+        fullmsg = '[download] ' + msg
         if self.params.get('progress_with_newline', False):
             self.to_screen(fullmsg)
         else:
@@ -211,13 +213,13 @@ class FileDownloader(object):
                 prev_len = getattr(self, '_report_progress_prev_line_length',
                                    0)
                 if prev_len > len(fullmsg):
-                    fullmsg += u' ' * (prev_len - len(fullmsg))
+                    fullmsg += ' ' * (prev_len - len(fullmsg))
                 self._report_progress_prev_line_length = len(fullmsg)
-                clear_line = u'\r'
+                clear_line = '\r'
             else:
-                clear_line = (u'\r\x1b[K' if sys.stderr.isatty() else u'\r')
+                clear_line = ('\r\x1b[K' if sys.stderr.isatty() else '\r')
             self.to_screen(clear_line + fullmsg, skip_eol=not is_last_line)
-        self.to_console_title(u'youtube-dl ' + msg)
+        self.to_console_title('youtube-dl ' + msg)
     def report_progress(self, percent, data_len_str, speed, eta):
         """Report download progress."""
@@ -233,7 +235,7 @@ class FileDownloader(object):
             percent_str = 'Unknown %'
         speed_str = self.format_speed(speed)
-        msg = (u'%s of %s at %s ETA %s' %
+        msg = ('%s of %s at %s ETA %s' %
                (percent_str, data_len_str, speed_str, eta_str))
         self._report_progress_status(msg)
@@ -243,37 +245,37 @@ class FileDownloader(object):
             downloaded_str = format_bytes(downloaded_data_len)
             speed_str = self.format_speed(speed)
             elapsed_str = FileDownloader.format_seconds(elapsed)
-            msg = u'%s at %s (%s)' % (downloaded_str, speed_str, elapsed_str)
+            msg = '%s at %s (%s)' % (downloaded_str, speed_str, elapsed_str)
         self._report_progress_status(msg)
     def report_finish(self, data_len_str, tot_time):
         """Report download finished."""
         if self.params.get('noprogress', False):
-            self.to_screen(u'[download] Download completed')
+            self.to_screen('[download] Download completed')
         else:
             self._report_progress_status(
-                (u'100%% of %s in %s' %
+                ('100%% of %s in %s' %
                  (data_len_str, self.format_seconds(tot_time))),
                 is_last_line=True)
     def report_resuming_byte(self, resume_len):
         """Report attempt to resume at given byte."""
-        self.to_screen(u'[download] Resuming download at byte %s' % resume_len)
+        self.to_screen('[download] Resuming download at byte %s' % resume_len)
     def report_retry(self, count, retries):
         """Report retry in case of HTTP error 5xx"""
-        self.to_screen(u'[download] Got server HTTP error. Retrying (attempt %d of %d)...' % (count, retries))
+        self.to_screen('[download] Got server HTTP error. Retrying (attempt %d of %d)...' % (count, retries))
     def report_file_already_downloaded(self, file_name):
         """Report file has already been fully downloaded."""
         try:
-            self.to_screen(u'[download] %s has already been downloaded' % file_name)
+            self.to_screen('[download] %s has already been downloaded' % file_name)
         except UnicodeEncodeError:
-            self.to_screen(u'[download] The file has already been downloaded')
+            self.to_screen('[download] The file has already been downloaded')
     def report_unable_to_resume(self):
         """Report it was impossible to resume download."""
-        self.to_screen(u'[download] Unable to resume')
+        self.to_screen('[download] Unable to resume')
     def download(self, filename, info_dict):
         """Download to a filename using the info from info_dict
@@ -293,7 +295,7 @@ class FileDownloader(object):
     def real_download(self, filename, info_dict):
         """Real download process. Redefine in subclasses."""
-        raise NotImplementedError(u'This method must be implemented by subclasses')
+        raise NotImplementedError('This method must be implemented by subclasses')
     def _hook_progress(self, status):
         for ph in self._progress_hooks:
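The padding logic in _report_progress_status is what keeps the one-line progress display clean: a shorter status is padded to the previous line's length before the carriage return, so stale characters get overdrawn. A standalone sketch of the same trick:

import sys
import time

# Pad the message to the previous line's length, then rewind with '\r'
# so the next write overdraws the old one completely.
prev_len = 0
for pct in range(0, 101, 25):
    msg = '[download] %3d%% of 10.00MiB' % pct
    padded = msg + ' ' * max(0, prev_len - len(msg))
    prev_len = len(padded)
    sys.stdout.write('\r' + padded)
    sys.stdout.flush()
    time.sleep(0.1)
sys.stdout.write('\n')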


@@ -244,9 +244,16 @@ class F4mFD(FileDownloader):
                               lambda f: int(f[0]) == requested_bitrate, formats))[0]
         base_url = compat_urlparse.urljoin(man_url, media.attrib['url'])
-        bootstrap = base64.b64decode(doc.find(_add_ns('bootstrapInfo')).text)
+        bootstrap_node = doc.find(_add_ns('bootstrapInfo'))
+        if bootstrap_node.text is None:
+            bootstrap_url = compat_urlparse.urljoin(
+                base_url, bootstrap_node.attrib['url'])
+            bootstrap = self.ydl.urlopen(bootstrap_url).read()
+        else:
+            bootstrap = base64.b64decode(bootstrap_node.text)
         metadata = base64.b64decode(media.find(_add_ns('metadata')).text)
         boot_info = read_bootstrap_info(bootstrap)
         fragments_list = build_fragments_list(boot_info)
         if self.params.get('test', False):
             # We only download the first fragment
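An F4M manifest may embed the bootstrap box as base64 text or merely reference it via a url attribute; the branch above now covers both. A toy sketch of that dispatch over an ElementTree node (the XML and the read_url stand-in are illustrative, not a real manifest):

import base64
import xml.etree.ElementTree as etree

def read_url(url):
    # Stand-in for self.ydl.urlopen(url).read() in the real downloader.
    raise NotImplementedError('network access elided in this sketch')

def get_bootstrap(doc):
    node = doc.find('bootstrapInfo')
    if node.text is None:                # referenced: <bootstrapInfo url="..."/>
        return read_url(node.attrib['url'])
    return base64.b64decode(node.text)   # inline base64 payload

inline = etree.fromstring(
    '<manifest><bootstrapInfo>%s</bootstrapInfo></manifest>'
    % base64.b64encode(b'abst...').decode('ascii'))
assert get_bootstrap(inline) == b'abst...'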


@@ -12,9 +12,15 @@ from ..utils import (
     compat_str,
     encodeFilename,
     format_bytes,
+    get_exe_version,
 )
+def rtmpdump_version():
+    return get_exe_version(
+        'rtmpdump', ['--help'], r'(?i)RTMPDump\s*v?([0-9a-zA-Z._-]+)')
 class RtmpFD(FileDownloader):
     def real_download(self, filename, info_dict):
         def run_rtmpdump(args):
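get_exe_version runs the binary and regex-matches the version out of its output; rtmpdump happens to print its banner on --help, hence the pattern above. A rough standalone equivalent (the real helper in youtube_dl.utils handles encodings and failure modes more carefully):

import re
import subprocess

def probe_version(exe, args, regex):
    # Run the executable, capture combined output, and pull the version
    # group out of it; False means not installed, 'present' means the
    # banner did not match the pattern.
    try:
        out = subprocess.Popen(
            [exe] + args, stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT).communicate()[0]
    except OSError:
        return False
    m = re.search(regex, out.decode('ascii', 'replace'))
    return m.group(1) if m else 'present'

print(probe_version('rtmpdump', ['--help'], r'(?i)RTMPDump\s*v?([0-9a-zA-Z._-]+)'))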


@@ -67,7 +67,6 @@ from .crunchyroll import (
     CrunchyrollShowPlaylistIE
 )
 from .cspan import CSpanIE
-from .d8 import D8IE
 from .dailymotion import (
     DailymotionIE,
     DailymotionPlaylistIE,
@@ -128,6 +127,7 @@ from .francetv import (
 )
 from .freesound import FreesoundIE
 from .freespeech import FreespeechIE
+from .freevideo import FreeVideoIE
 from .funnyordie import FunnyOrDieIE
 from .gamekings import GamekingsIE
 from .gameone import (
@@ -142,6 +142,7 @@ from .generic import GenericIE
 from .glide import GlideIE
 from .globo import GloboIE
 from .godtube import GodTubeIE
+from .goldenmoustache import GoldenMoustacheIE
 from .golem import GolemIE
 from .googleplus import GooglePlusIE
 from .googlesearch import GoogleSearchIE
@@ -189,6 +190,7 @@ from .kontrtube import KontrTubeIE
 from .krasview import KrasViewIE
 from .ku6 import Ku6IE
 from .la7 import LA7IE
+from .laola1tv import Laola1TvIE
 from .lifenews import LifeNewsIE
 from .liveleak import LiveLeakIE
 from .livestream import (
@@ -251,7 +253,7 @@ from .newstube import NewstubeIE
 from .nfb import NFBIE
 from .nfl import NFLIE
 from .nhl import NHLIE, NHLVideocenterIE
-from .niconico import NiconicoIE
+from .niconico import NiconicoIE, NiconicoPlaylistIE
 from .ninegag import NineGagIE
 from .noco import NocoIE
 from .normalboots import NormalbootsIE
@@ -280,6 +282,7 @@ from .orf import (
 from .parliamentliveuk import ParliamentLiveUKIE
 from .patreon import PatreonIE
 from .pbs import PBSIE
+from .phoenix import PhoenixIE
 from .photobucket import PhotobucketIE
 from .planetaplay import PlanetaPlayIE
 from .played import PlayedIE
@@ -293,6 +296,7 @@ from .pornoxo import PornoXOIE
 from .promptfile import PromptFileIE
 from .prosiebensat1 import ProSiebenSat1IE
 from .pyvideo import PyvideoIE
+from .quickvid import QuickVidIE
 from .radiofrance import RadioFranceIE
 from .rai import RaiIE
 from .rbmaradio import RBMARadioIE
@@ -321,6 +325,7 @@ from .sbs import SBSIE
 from .scivee import SciVeeIE
 from .screencast import ScreencastIE
 from .servingsys import ServingSysIE
+from .sexu import SexuIE
 from .sexykarma import SexyKarmaIE
 from .shared import SharedIE
 from .sharesix import ShareSixIE
@@ -355,6 +360,7 @@ from .spike import SpikeIE
 from .sport5 import Sport5IE
 from .sportbox import SportBoxIE
 from .sportdeutschland import SportDeutschlandIE
+from .srmediathek import SRMediathekIE
 from .stanfordoc import StanfordOpenClassroomIE
 from .steam import SteamIE
 from .streamcloud import StreamcloudIE
@@ -418,6 +424,7 @@ from .vesti import VestiIE
 from .vevo import VevoIE
 from .vgtv import VGTVIE
 from .vh1 import VH1IE
+from .vice import ViceIE
 from .viddler import ViddlerIE
 from .videobam import VideoBamIE
 from .videodetective import VideoDetectiveIE


@@ -11,13 +11,13 @@ class ABCIE(InfoExtractor):
     _VALID_URL = r'http://www\.abc\.net\.au/news/[^/]+/[^/]+/(?P<id>\d+)'
     _TEST = {
-        'url': 'http://www.abc.net.au/news/2014-07-25/bringing-asylum-seekers-to-australia-would-give/5624716',
-        'md5': 'dad6f8ad011a70d9ddf887ce6d5d0742',
+        'url': 'http://www.abc.net.au/news/2014-11-05/australia-to-staff-ebola-treatment-centre-in-sierra-leone/5868334',
+        'md5': 'cb3dd03b18455a661071ee1e28344d9f',
         'info_dict': {
-            'id': '5624716',
+            'id': '5868334',
             'ext': 'mp4',
-            'title': 'Bringing asylum seekers to Australia would give them right to asylum claims: professor',
-            'description': 'md5:ba36fa5e27e5c9251fd929d339aea4af',
+            'title': 'Australia to help staff Ebola treatment centre in Sierra Leone',
+            'description': 'md5:809ad29c67a05f54eb41f2a105693a67',
         },
     }


@@ -3,12 +3,13 @@ from __future__ import unicode_literals
 import re
 from .common import InfoExtractor
-from ..utils import (
+from ..compat import (
     compat_HTTPError,
     compat_str,
     compat_urllib_parse,
     compat_urllib_parse_urlparse,
+)
+from ..utils import (
     ExtractorError,
 )


@@ -22,7 +22,7 @@ class AllocineIE(InfoExtractor):
             'id': '19546517',
             'ext': 'mp4',
             'title': 'Astérix - Le Domaine des Dieux Teaser VF',
-            'description': 'md5:4a754271d9c6f16c72629a8a993ee884',
+            'description': 'md5:abcd09ce503c6560512c14ebfdb720d2',
             'thumbnail': 're:http://.*\.jpg',
         },
     }, {


@@ -4,6 +4,7 @@ from __future__ import unicode_literals
 import re
 from .common import InfoExtractor
+from .generic import GenericIE
 from ..utils import (
     determine_ext,
     ExtractorError,
@@ -12,6 +13,7 @@ from ..utils import (
     parse_duration,
     unified_strdate,
     xpath_text,
+    parse_xml,
 )
@@ -54,6 +56,11 @@ class ARDMediathekIE(InfoExtractor):
         if '>Der gewünschte Beitrag ist nicht mehr verfügbar.<' in webpage:
             raise ExtractorError('Video %s is no longer available' % video_id, expected=True)
+        if re.search(r'[\?&]rss($|[=&])', url):
+            doc = parse_xml(webpage)
+            if doc.tag == 'rss':
+                return GenericIE()._extract_rss(url, video_id, doc)
         title = self._html_search_regex(
             [r'<h1(?:\s+class="boxTopHeadline")?>(.*?)</h1>',
              r'<meta name="dcterms.title" content="(.*?)"/>',


@@ -8,10 +8,10 @@ from ..utils import (
     ExtractorError,
     find_xpath_attr,
     unified_strdate,
-    determine_ext,
     get_element_by_id,
     get_element_by_attribute,
     int_or_none,
+    qualities,
 )
 # There are different sources of video in arte.tv, the extraction process
@@ -102,79 +102,54 @@ class ArteTVPlus7IE(InfoExtractor):
             'upload_date': unified_strdate(upload_date_str),
             'thumbnail': player_info.get('programImage') or player_info.get('VTU', {}).get('IUR'),
         }
-        all_formats = []
-        for format_id, format_dict in player_info['VSR'].items():
-            fmt = dict(format_dict)
-            fmt['format_id'] = format_id
-            all_formats.append(fmt)
-        # Some formats use the m3u8 protocol
-        all_formats = list(filter(lambda f: f.get('videoFormat') != 'M3U8', all_formats))
-        def _match_lang(f):
-            if f.get('versionCode') is None:
-                return True
-            # Return true if that format is in the language of the url
-            if lang == 'fr':
-                l = 'F'
-            elif lang == 'de':
-                l = 'A'
-            else:
-                l = lang
-            regexes = [r'VO?%s' % l, r'VO?.-ST%s' % l]
-            return any(re.match(r, f['versionCode']) for r in regexes)
-        # Some formats may not be in the same language as the url
-        # TODO: Might want not to drop videos that does not match requested language
-        # but to process those formats with lower precedence
-        formats = filter(_match_lang, all_formats)
-        formats = list(formats) # in python3 filter returns an iterator
-        if not formats:
-            # Some videos are only available in the 'Originalversion'
-            # they aren't tagged as being in French or German
-            # Sometimes there are neither videos of requested lang code
-            # nor original version videos available
-            # For such cases we just take all_formats as is
-            formats = all_formats
-            if not formats:
-                raise ExtractorError('The formats list is empty')
-        if re.match(r'[A-Z]Q', formats[0]['quality']) is not None:
-            def sort_key(f):
-                return ['HQ', 'MQ', 'EQ', 'SQ'].index(f['quality'])
-        else:
-            def sort_key(f):
-                versionCode = f.get('versionCode')
-                if versionCode is None:
-                    versionCode = ''
-                return (
-                    # Sort first by quality
-                    int(f.get('height', -1)),
-                    int(f.get('bitrate', -1)),
-                    # The original version with subtitles has lower relevance
-                    re.match(r'VO-ST(F|A)', versionCode) is None,
-                    # The version with sourds/mal subtitles has also lower relevance
-                    re.match(r'VO?(F|A)-STM\1', versionCode) is None,
-                    # Prefer http downloads over m3u8
-                    0 if f['url'].endswith('m3u8') else 1,
-                )
-        formats = sorted(formats, key=sort_key)
-        def _format(format_info):
-            info = {
-                'format_id': format_info['format_id'],
-                'format_note': '%s, %s' % (format_info.get('versionCode'), format_info.get('versionLibelle')),
-                'width': int_or_none(format_info.get('width')),
-                'height': int_or_none(format_info.get('height')),
-                'tbr': int_or_none(format_info.get('bitrate')),
-            }
-            if format_info['mediaType'] == 'rtmp':
-                info['url'] = format_info['streamer']
-                info['play_path'] = 'mp4:' + format_info['url']
-                info['ext'] = 'flv'
-            else:
-                info['url'] = format_info['url']
-                info['ext'] = determine_ext(info['url'])
-            return info
-        info_dict['formats'] = [_format(f) for f in formats]
+        qfunc = qualities(['HQ', 'MQ', 'EQ', 'SQ'])
+        formats = []
+        for format_id, format_dict in player_info['VSR'].items():
+            f = dict(format_dict)
+            versionCode = f.get('versionCode')
+            langcode = {
+                'fr': 'F',
+                'de': 'A',
+            }.get(lang, lang)
+            lang_rexs = [r'VO?%s' % langcode, r'VO?.-ST%s' % langcode]
+            lang_pref = (
+                None if versionCode is None else (
+                    10 if any(re.match(r, versionCode) for r in lang_rexs)
+                    else -10))
+            source_pref = 0
+            if versionCode is not None:
+                # The original version with subtitles has lower relevance
+                if re.match(r'VO-ST(F|A)', versionCode):
+                    source_pref -= 10
+                elif re.match(r'VO?(F|A)-STM\1', versionCode):
+                    source_pref -= 9
+            format = {
+                'format_id': format_id,
+                'preference': -10 if f.get('videoFormat') == 'M3U8' else None,
+                'language_preference': lang_pref,
+                'format_note': '%s, %s' % (f.get('versionCode'), f.get('versionLibelle')),
+                'width': int_or_none(f.get('width')),
+                'height': int_or_none(f.get('height')),
+                'tbr': int_or_none(f.get('bitrate')),
+                'quality': qfunc(f['quality']),
+                'source_preference': source_pref,
+            }
+            if f.get('mediaType') == 'rtmp':
+                format['url'] = f['streamer']
+                format['play_path'] = 'mp4:' + f['url']
+                format['ext'] = 'flv'
+            else:
+                format['url'] = f['url']
+            formats.append(format)
+        self._sort_formats(formats)
+        info_dict['formats'] = formats
         return info_dict
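Instead of a bespoke sort_key, each format now just carries quality, language_preference and source_preference annotations and the shared self._sort_formats() does the ordering. A toy model of how such preference fields rank formats (simplified; the real comparator in InfoExtractor._sort_formats weighs many more fields):

# Toy model: youtube-dl sorts formats worst-to-best, so the best format
# ends up last. Format ids and numbers below are made up.
formats = [
    {'format_id': 'HQ-VF', 'quality': 3, 'language_preference': 10, 'source_preference': 0},
    {'format_id': 'SQ-VF', 'quality': 0, 'language_preference': 10, 'source_preference': 0},
    {'format_id': 'HQ-VO-STF', 'quality': 3, 'language_preference': 10, 'source_preference': -10},
]
formats.sort(key=lambda f: (
    f['language_preference'], f['quality'], f['source_preference']))
assert formats[-1]['format_id'] == 'HQ-VF'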


@@ -4,7 +4,7 @@ from __future__ import unicode_literals
 from .common import InfoExtractor
 from .soundcloud import SoundcloudIE
 from ..utils import ExtractorError
-import datetime
 import time


@@ -24,8 +24,7 @@ class AUEngineIE(InfoExtractor):
     }
     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)
         title = self._html_search_regex(r'<title>(?P<title>.+?)</title>', webpage, 'title')


@@ -110,20 +110,25 @@ class BandcampAlbumIE(InfoExtractor):
         'url': 'http://blazo.bandcamp.com/album/jazz-format-mixtape-vol-1',
         'playlist': [
             {
-                'file': '1353101989.mp3',
                 'md5': '39bc1eded3476e927c724321ddf116cf',
                 'info_dict': {
+                    'id': '1353101989',
+                    'ext': 'mp3',
                     'title': 'Intro',
                 }
             },
             {
-                'file': '38097443.mp3',
                 'md5': '1a2c32e2691474643e912cc6cd4bffaa',
                 'info_dict': {
+                    'id': '38097443',
+                    'ext': 'mp3',
                     'title': 'Kero One - Keep It Alive (Blazo remix)',
                 }
             },
         ],
+        'info_dict': {
+            'title': 'Jazz Format Mixtape vol.1',
+        },
         'params': {
             'playlistend': 2
         },

View File

@@ -71,11 +71,12 @@ class BlipTVIE(SubtitlesInfoExtractor):
         mobj = re.match(self._VALID_URL, url)
         lookup_id = mobj.group('lookup_id')
-        # See https://github.com/rg3/youtube-dl/issues/857
+        # See https://github.com/rg3/youtube-dl/issues/857 and
+        # https://github.com/rg3/youtube-dl/issues/4197
         if lookup_id:
             info_page = self._download_webpage(
                 'http://blip.tv/play/%s.x?p=1' % lookup_id, lookup_id, 'Resolving lookup id')
-            video_id = self._search_regex(r'data-episode-id="([0-9]+)', info_page, 'video_id')
+            video_id = self._search_regex(r'config\.id\s*=\s*"([0-9]+)', info_page, 'video_id')
         else:
             video_id = mobj.group('id')

View File

@@ -1,8 +1,6 @@
 # coding: utf-8
 from __future__ import unicode_literals
-import re
 from .common import InfoExtractor
 from ..utils import (
     ExtractorError,

View File

@@ -14,6 +14,7 @@ from ..utils import (
     compat_str,
     compat_urllib_request,
     compat_parse_qs,
+    compat_urllib_parse_urlparse,
     determine_ext,
     ExtractorError,
@@ -23,7 +24,7 @@ from ..utils import (
 class BrightcoveIE(InfoExtractor):
-    _VALID_URL = r'https?://.*brightcove\.com/(services|viewer).*\?(?P<query>.*)'
+    _VALID_URL = r'https?://.*brightcove\.com/(services|viewer).*?\?(?P<query>.*)'
     _FEDERATED_URL_TEMPLATE = 'http://c.brightcove.com/services/viewer/htmlFederated?%s'
     _TESTS = [
@@ -260,11 +261,19 @@ class BrightcoveIE(InfoExtractor):
         formats = []
         for rend in renditions:
             url = rend['defaultURL']
+            if not url:
+                continue
             if rend['remote']:
-                # This type of renditions are served through akamaihd.net,
-                # but they don't use f4m manifests
-                url = url.replace('control/', '') + '?&v=3.3.0&fp=13&r=FEEFJ&g=RTSJIMBMPFPB'
-                ext = 'flv'
+                url_comp = compat_urllib_parse_urlparse(url)
+                if url_comp.path.endswith('.m3u8'):
+                    formats.extend(
+                        self._extract_m3u8_formats(url, info['id'], 'mp4'))
+                    continue
+                elif 'akamaihd.net' in url_comp.netloc:
+                    # This type of renditions are served through
+                    # akamaihd.net, but they don't use f4m manifests
+                    url = url.replace('control/', '') + '?&v=3.3.0&fp=13&r=FEEFJ&g=RTSJIMBMPFPB'
+                    ext = 'flv'
             else:
                 ext = determine_ext(url)
             size = rend.get('size')
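
Note: the remote-rendition branch now dispatches on the parsed URL instead of assuming akamaihd.net. A small standalone sketch of the same dispatch, using Python 3's urllib.parse in place of compat_urllib_parse_urlparse (the sample URL is made up):

    from urllib.parse import urlparse

    def classify_rendition(url):
        # Mirror the dispatch added above: HLS manifests first, then the
        # akamaihd special case, otherwise a plain HTTP download
        parts = urlparse(url)
        if parts.path.endswith('.m3u8'):
            return 'hls'            # hand off to _extract_m3u8_formats
        elif 'akamaihd.net' in parts.netloc:
            return 'akamaihd-flv'   # rewritten URL, downloaded as flv
        return 'http'

    print(classify_rendition('http://example.akamaihd.net/foo/master.m3u8'))  # hls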

View File

@@ -10,12 +10,12 @@ from ..utils import ExtractorError
 class BYUtvIE(InfoExtractor):
     _VALID_URL = r'^https?://(?:www\.)?byutv.org/watch/[0-9a-f-]+/(?P<video_id>[^/?#]+)'
     _TEST = {
-        'url': 'http://www.byutv.org/watch/44e80f7b-e3ba-43ba-8c51-b1fd96c94a79/granite-flats-talking',
+        'url': 'http://www.byutv.org/watch/6587b9a3-89d2-42a6-a7f7-fd2f81840a7d/studio-c-season-5-episode-5',
         'info_dict': {
-            'id': 'granite-flats-talking',
+            'id': 'studio-c-season-5-episode-5',
             'ext': 'mp4',
-            'description': 'md5:4e9a7ce60f209a33eca0ac65b4918e1c',
-            'title': 'Talking',
+            'description': 'md5:5438d33774b6bdc662f9485a340401cc',
+            'title': 'Season 5 Episode 5',
             'thumbnail': 're:^https?://.*promo.*'
         },
         'params': {

View File

@@ -7,15 +7,21 @@ from .common import InfoExtractor
 from ..utils import (
     unified_strdate,
     url_basename,
+    qualities,
 )
 class CanalplusIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.canalplus\.fr/.*?/(?P<path>.*)|player\.canalplus\.fr/#/(?P<id>[0-9]+))'
-    _VIDEO_INFO_TEMPLATE = 'http://service.canal-plus.com/video/rest/getVideosLiees/cplus/%s'
-    IE_NAME = 'canalplus.fr'
-    _TEST = {
+    IE_DESC = 'canalplus.fr, piwiplus.fr and d8.tv'
+    _VALID_URL = r'https?://(?:www\.(?P<site>canalplus\.fr|piwiplus\.fr|d8\.tv)/.*?/(?P<path>.*)|player\.canalplus\.fr/#/(?P<id>[0-9]+))'
+    _VIDEO_INFO_TEMPLATE = 'http://service.canal-plus.com/video/rest/getVideosLiees/%s/%s'
+    _SITE_ID_MAP = {
+        'canalplus.fr': 'cplus',
+        'piwiplus.fr': 'teletoon',
+        'd8.tv': 'd8',
+    }
+    _TESTS = [{
         'url': 'http://www.canalplus.fr/c-infos-documentaires/pid1830-c-zapping.html?vid=922470',
         'md5': '3db39fb48b9685438ecf33a1078023e4',
         'info_dict': {
@@ -25,36 +31,73 @@ class CanalplusIE(InfoExtractor):
             'description': 'Le meilleur de toutes les chaînes, tous les jours.\nEmission du 26 août 2013',
             'upload_date': '20130826',
         },
-    }
+    }, {
+        'url': 'http://www.piwiplus.fr/videos-piwi/pid1405-le-labyrinthe-boing-super-ranger.html?vid=1108190',
+        'info_dict': {
+            'id': '1108190',
+            'ext': 'flv',
+            'title': 'Le labyrinthe - Boing super ranger',
+            'description': 'md5:4cea7a37153be42c1ba2c1d3064376ff',
+            'upload_date': '20140724',
+        },
+        'skip': 'Only works from France',
+    }, {
+        'url': 'http://www.d8.tv/d8-docs-mags/pid6589-d8-campagne-intime.html',
+        'info_dict': {
+            'id': '966289',
+            'ext': 'flv',
+            'title': 'Campagne intime - Documentaire exceptionnel',
+            'description': 'md5:d2643b799fb190846ae09c61e59a859f',
+            'upload_date': '20131108',
+        },
+        'skip': 'videos get deleted after a while',
+    }]
     def _real_extract(self, url):
         mobj = re.match(self._VALID_URL, url)
         video_id = mobj.groupdict().get('id')
+        site_id = self._SITE_ID_MAP[mobj.group('site') or 'canal']
         # Beware, some subclasses do not define an id group
         display_id = url_basename(mobj.group('path'))
         if video_id is None:
             webpage = self._download_webpage(url, display_id)
-            video_id = self._search_regex(r'<canal:player videoId="(\d+)"', webpage, 'video id')
+            video_id = self._search_regex(
+                r'<canal:player[^>]+?videoId="(\d+)"', webpage, 'video id')
-        info_url = self._VIDEO_INFO_TEMPLATE % video_id
+        info_url = self._VIDEO_INFO_TEMPLATE % (site_id, video_id)
         doc = self._download_xml(info_url, video_id, 'Downloading video XML')
         video_info = [video for video in doc if video.find('ID').text == video_id][0]
         media = video_info.find('MEDIA')
         infos = video_info.find('INFOS')
-        preferences = ['MOBILE', 'BAS_DEBIT', 'HAUT_DEBIT', 'HD', 'HLS', 'HDS']
-        formats = [
-            {
-                'url': fmt.text + '?hdcore=2.11.3' if fmt.tag == 'HDS' else fmt.text,
-                'format_id': fmt.tag,
-                'ext': 'mp4' if fmt.tag == 'HLS' else 'flv',
-                'preference': preferences.index(fmt.tag) if fmt.tag in preferences else -1,
-            } for fmt in media.find('VIDEOS') if fmt.text
-        ]
+        preference = qualities(['MOBILE', 'BAS_DEBIT', 'HAUT_DEBIT', 'HD', 'HLS', 'HDS'])
+        formats = []
+        for fmt in media.find('VIDEOS'):
+            format_url = fmt.text
+            if not format_url:
+                continue
+            format_id = fmt.tag
+            if format_id == 'HLS':
+                hls_formats = self._extract_m3u8_formats(format_url, video_id, 'flv')
+                for fmt in hls_formats:
+                    fmt['preference'] = preference(format_id)
+                formats.extend(hls_formats)
+            elif format_id == 'HDS':
+                hds_formats = self._extract_f4m_formats(format_url + '?hdcore=2.11.3', video_id)
+                for fmt in hds_formats:
+                    fmt['preference'] = preference(format_id)
+                formats.extend(hds_formats)
+            else:
+                formats.append({
+                    'url': format_url,
+                    'format_id': format_id,
+                    'preference': preference(format_id),
+                })
         self._sort_formats(formats)
         return {
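
Note: qualities() replaces the manual preferences.index() lookup. A minimal re-implementation showing what the returned preference callable does (this mirrors the documented behaviour of the utils helper; unknown IDs fall back to -1):

    def qualities(quality_ids):
        def q(qid):
            # Position in the list is the rank; later entries are better
            try:
                return quality_ids.index(qid)
            except ValueError:
                return -1
        return q

    preference = qualities(['MOBILE', 'BAS_DEBIT', 'HAUT_DEBIT', 'HD', 'HLS', 'HDS'])
    print(preference('MOBILE'))   # 0 (worst)
    print(preference('HDS'))      # 5 (best)
    print(preference('UNKNOWN'))  # -1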

View File

@@ -27,7 +27,7 @@ class Channel9IE(InfoExtractor):
                 'title': 'Developer Kick-Off Session: Stuff We Love',
                 'description': 'md5:c08d72240b7c87fcecafe2692f80e35f',
                 'duration': 4576,
-                'thumbnail': 'http://media.ch9.ms/ch9/9d51/03902f2d-fc97-4d3c-b195-0bfe15a19d51/KOS002_220.jpg',
+                'thumbnail': 'http://video.ch9.ms/ch9/9d51/03902f2d-fc97-4d3c-b195-0bfe15a19d51/KOS002_220.jpg',
                 'session_code': 'KOS002',
                 'session_day': 'Day 1',
                 'session_room': 'Arena 1A',
@@ -43,7 +43,7 @@ class Channel9IE(InfoExtractor):
                 'title': 'Self-service BI with Power BI - nuclear testing',
                 'description': 'md5:d1e6ecaafa7fb52a2cacdf9599829f5b',
                 'duration': 1540,
-                'thumbnail': 'http://media.ch9.ms/ch9/87e1/0300391f-a455-4c72-bec3-4422f19287e1/selfservicenuk_512.jpg',
+                'thumbnail': 'http://video.ch9.ms/ch9/87e1/0300391f-a455-4c72-bec3-4422f19287e1/selfservicenuk_512.jpg',
                 'authors': [ 'Mike Wilmot' ],
             },
         }
@@ -94,7 +94,7 @@ class Channel9IE(InfoExtractor):
     def _extract_title(self, html):
         title = self._html_search_meta('title', html, 'title')
         if title is None:
             title = self._og_search_title(html)
         TITLE_SUFFIX = ' (Channel 9)'
         if title is not None and title.endswith(TITLE_SUFFIX):
@@ -115,7 +115,7 @@ class Channel9IE(InfoExtractor):
         return self._html_search_meta('description', html, 'description')
     def _extract_duration(self, html):
-        m = re.search(r'data-video_duration="(?P<hours>\d{2}):(?P<minutes>\d{2}):(?P<seconds>\d{2})"', html)
+        m = re.search(r'"length": *"(?P<hours>\d{2}):(?P<minutes>\d{2}):(?P<seconds>\d{2})"', html)
         return ((int(m.group('hours')) * 60 * 60) + (int(m.group('minutes')) * 60) + int(m.group('seconds'))) if m else None
     def _extract_slides(self, html):
@@ -167,7 +167,7 @@ class Channel9IE(InfoExtractor):
         return re.findall(r'<a href="/Events/Speakers/[^"]+">([^<]+)</a>', html)
     def _extract_content(self, html, content_path):
         # Look for downloadable content
         formats = self._formats_from_html(html)
         slides = self._extract_slides(html)
         zip_ = self._extract_zip(html)
@@ -258,16 +258,17 @@ class Channel9IE(InfoExtractor):
         webpage = self._download_webpage(url, content_path, 'Downloading web page')
-        page_type_m = re.search(r'<meta name="Search.PageType" content="(?P<pagetype>[^"]+)"/>', webpage)
-        if page_type_m is None:
-            raise ExtractorError('Search.PageType not found, don\'t know how to process this page', expected=True)
-        page_type = page_type_m.group('pagetype')
-        if page_type == 'List': # List page, may contain list of 'item'-like objects
+        page_type_m = re.search(r'<meta name="WT.entryid" content="(?P<pagetype>[^:]+)[^"]+"/>', webpage)
+        if page_type_m is not None:
+            page_type = page_type_m.group('pagetype')
+            if page_type == 'Entry':      # Any 'item'-like page, may contain downloadable content
+                return self._extract_entry_item(webpage, content_path)
+            elif page_type == 'Session':  # Event session page, may contain downloadable content
+                return self._extract_session(webpage, content_path)
+            elif page_type == 'Event':
+                return self._extract_list(content_path)
+            else:
+                raise ExtractorError('Unexpected WT.entryid %s' % page_type, expected=True)
+        else:  # Assuming list
             return self._extract_list(content_path)
-        elif page_type == 'Entry.Item': # Any 'item'-like page, may contain downloadable content
-            return self._extract_entry_item(webpage, content_path)
-        elif page_type == 'Session': # Event session page, may contain downloadable content
-            return self._extract_session(webpage, content_path)
-        else:
-            raise ExtractorError('Unexpected Search.PageType %s' % page_type, expected=True)

View File

@@ -42,11 +42,12 @@ class CinemassacreIE(InfoExtractor):
         webpage = self._download_webpage(url, display_id)
         video_date = mobj.group('date_Y') + mobj.group('date_m') + mobj.group('date_d')
-        mobj = re.search(r'src="(?P<embed_url>http://player\.screenwavemedia\.com/play/[a-zA-Z]+\.php\?[^"]*\bid=(?:Cinemassacre-)?(?P<video_id>.+?))"', webpage)
+        mobj = re.search(r'src="(?P<embed_url>http://player\.screenwavemedia\.com/play/[a-zA-Z]+\.php\?[^"]*\bid=(?P<full_video_id>(?:Cinemassacre-)?(?P<video_id>.+?)))"', webpage)
         if not mobj:
             raise ExtractorError('Can\'t extract embed url and video id')
         playerdata_url = mobj.group('embed_url')
         video_id = mobj.group('video_id')
+        full_video_id = mobj.group('full_video_id')
         video_title = self._html_search_regex(
             r'<title>(?P<title>.+?)\|', webpage, 'title')
@@ -59,41 +60,53 @@ class CinemassacreIE(InfoExtractor):
         vidurl = self._search_regex(
             r'\'vidurl\'\s*:\s*"([^\']+)"', playerdata, 'vidurl').replace('\\/', '/')
-        vidid = self._search_regex(
-            r'\'vidid\'\s*:\s*"([^\']+)"', playerdata, 'vidid')
-        videoserver = self._html_search_regex(
-            r"'videoserver'\s*:\s*'([^']+)'", playerdata, 'videoserver')
-        videolist_url = 'http://%s/vod/smil:%s.smil/jwplayer.smil' % (videoserver, vidid)
-        videolist = self._download_xml(videolist_url, video_id, 'Downloading videolist XML')
-        formats = []
-        baseurl = vidurl[:vidurl.rfind('/')+1]
-        for video in videolist.findall('.//video'):
-            src = video.get('src')
-            if not src:
-                continue
-            file_ = src.partition(':')[-1]
-            width = int_or_none(video.get('width'))
-            height = int_or_none(video.get('height'))
-            bitrate = int_or_none(video.get('system-bitrate'))
-            format = {
-                'url': baseurl + file_,
-                'format_id': src.rpartition('.')[0].rpartition('_')[-1],
-            }
-            if width or height:
-                format.update({
-                    'tbr': bitrate // 1000 if bitrate else None,
-                    'width': width,
-                    'height': height,
-                })
-            else:
-                format.update({
-                    'abr': bitrate // 1000 if bitrate else None,
-                    'vcodec': 'none',
-                })
-            formats.append(format)
-        self._sort_formats(formats)
+        videolist_url = None
+        mobj = re.search(r"'videoserver'\s*:\s*'(?P<videoserver>[^']+)'", playerdata)
+        if mobj:
+            videoserver = mobj.group('videoserver')
+            mobj = re.search(r'\'vidid\'\s*:\s*"(?P<vidid>[^\']+)"', playerdata)
+            vidid = mobj.group('vidid') if mobj else full_video_id
+            videolist_url = 'http://%s/vod/smil:%s.smil/jwplayer.smil' % (videoserver, vidid)
+        else:
+            mobj = re.search(r"file\s*:\s*'(?P<smil>http.+?/jwplayer\.smil)'", playerdata)
+            if mobj:
+                videolist_url = mobj.group('smil')
+        if videolist_url:
+            videolist = self._download_xml(videolist_url, video_id, 'Downloading videolist XML')
+            formats = []
+            baseurl = vidurl[:vidurl.rfind('/')+1]
+            for video in videolist.findall('.//video'):
+                src = video.get('src')
+                if not src:
+                    continue
+                file_ = src.partition(':')[-1]
+                width = int_or_none(video.get('width'))
+                height = int_or_none(video.get('height'))
+                bitrate = int_or_none(video.get('system-bitrate'))
+                format = {
+                    'url': baseurl + file_,
+                    'format_id': src.rpartition('.')[0].rpartition('_')[-1],
+                }
+                if width or height:
+                    format.update({
+                        'tbr': bitrate // 1000 if bitrate else None,
+                        'width': width,
+                        'height': height,
+                    })
+                else:
+                    format.update({
+                        'abr': bitrate // 1000 if bitrate else None,
+                        'vcodec': 'none',
+                    })
+                formats.append(format)
+            self._sort_formats(formats)
+        else:
+            formats = [{
+                'url': vidurl,
+            }]
         return {
             'id': video_id,

View File

@@ -4,7 +4,6 @@ import json
 import re
 from .common import InfoExtractor
-from ..utils import int_or_none
 _translation_table = {
@@ -39,9 +38,7 @@ class CliphunterIE(InfoExtractor):
     }
     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)
         video_title = self._search_regex(

View File

@@ -4,14 +4,16 @@ from __future__ import unicode_literals
 import re
 from .common import InfoExtractor
-from ..utils import (
-    ExtractorError,
+from ..compat import (
     compat_parse_qs,
     compat_urllib_parse,
-    remove_end,
-    HEADRequest,
     compat_HTTPError,
 )
+from ..utils import (
+    ExtractorError,
+    HEADRequest,
+    remove_end,
+)
 class CloudyIE(InfoExtractor):
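
Note: the same import split recurs throughout this compare (cloudy, common, dropbox, facebook, generic, globo, grooveshark): Python 2/3 shims move to the new ..compat module while youtube-dl's own helpers stay in ..utils. A sketch of the idea behind one such shim (simplified; the real compat module covers many more names):

    # One name that resolves to the right stdlib object on both interpreters
    try:
        import urllib.parse as compat_urllib_parse  # Python 3
    except ImportError:
        import urllib as compat_urllib_parse        # Python 2

    print(compat_urllib_parse.quote('a b'))  # 'a%20b' on both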

View File

@@ -16,9 +16,10 @@ class CNNIE(InfoExtractor):
     _TESTS = [{
         'url': 'http://edition.cnn.com/video/?/video/sports/2013/06/09/nadal-1-on-1.cnn',
-        'file': 'sports_2013_06_09_nadal-1-on-1.cnn.mp4',
         'md5': '3e6121ea48df7e2259fe73a0628605c4',
         'info_dict': {
+            'id': 'sports_2013_06_09_nadal-1-on-1.cnn',
+            'ext': 'mp4',
             'title': 'Nadal wins 8th French Open title',
             'description': 'World Sport\'s Amanda Davies chats with 2013 French Open champion Rafael Nadal.',
             'duration': 135,
@@ -27,9 +28,10 @@ class CNNIE(InfoExtractor):
     },
     {
         "url": "http://edition.cnn.com/video/?/video/us/2013/08/21/sot-student-gives-epic-speech.georgia-institute-of-technology&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+rss%2Fcnn_topstories+%28RSS%3A+Top+Stories%29",
-        "file": "us_2013_08_21_sot-student-gives-epic-speech.georgia-institute-of-technology.mp4",
         "md5": "b5cc60c60a3477d185af8f19a2a26f4e",
         "info_dict": {
+            'id': 'us/2013/08/21/sot-student-gives-epic-speech.georgia-institute-of-technology',
+            'ext': 'mp4',
             "title": "Student's epic speech stuns new freshmen",
             "description": "A Georgia Tech student welcomes the incoming freshmen with an epic speech backed by music from \"2001: A Space Odyssey.\"",
             "upload_date": "20130821",

View File

@@ -31,7 +31,7 @@ class ComedyCentralIE(MTVServicesInfoExtractor):
     }
-class ComedyCentralShowsIE(InfoExtractor):
+class ComedyCentralShowsIE(MTVServicesInfoExtractor):
     IE_DESC = 'The Daily Show / The Colbert Report'
     # urls can be abbreviations like :thedailyshow or :colbert
     # urls for episodes like:
@@ -109,14 +109,6 @@ class ComedyCentralShowsIE(InfoExtractor):
         '400': (384, 216),
     }
-    @staticmethod
-    def _transform_rtmp_url(rtmp_video_url):
-        m = re.match(r'^rtmpe?://.*?/(?P<finalid>gsp\.comedystor/.*)$', rtmp_video_url)
-        if not m:
-            raise ExtractorError('Cannot transform RTMP url')
-        base = 'http://mtvnmobile.vo.llnwd.net/kip0/_pxn=1+_pxI0=Ripod-h264+_pxL0=undefined+_pxM0=+_pxK=18639+_pxE=mp4/44620/mtvnorigin/'
-        return base + m.group('finalid')
     def _real_extract(self, url):
         mobj = re.match(self._VALID_URL, url, re.VERBOSE)
         if mobj is None:
@@ -212,9 +204,6 @@ class ComedyCentralShowsIE(InfoExtractor):
                     'ext': self._video_extensions.get(format, 'mp4'),
                     'height': h,
                     'width': w,
-                    'format_note': 'HTTP 400 at the moment (patches welcome!)',
-                    'preference': -100,
                 })
                 formats.append({
                     'format_id': 'rtmp-%s' % format,

View File

@@ -12,13 +12,14 @@ import sys
 import time
 import xml.etree.ElementTree
-from ..utils import (
+from ..compat import (
     compat_http_client,
     compat_urllib_error,
     compat_urllib_parse_urlparse,
     compat_urlparse,
     compat_str,
+)
+from ..utils import (
     clean_html,
     compiled_regex_type,
     ExtractorError,
@@ -72,6 +73,7 @@ class InfoExtractor(object):
     * acodec     Name of the audio codec in use
     * asr        Audio sampling rate in Hertz
     * vbr        Average video bitrate in KBit/s
+    * fps        Frame rate
     * vcodec     Name of the video codec in use
     * container  Name of the container format
     * filesize   The number of bytes, if known in advance
@@ -85,6 +87,11 @@ class InfoExtractor(object):
                     by this field, regardless of all other values.
                     -1 for default (order by other properties),
                     -2 or smaller for less than default.
+    * language_preference  Is this in the correct requested
+                    language?
+                    10 if it's what the URL is about,
+                    -1 for default (don't know),
+                    -10 otherwise, other values reserved for now.
     * quality      Order number of the video quality of this
                    format, irrespective of the file format.
                    -1 for default (order by other properties),
@@ -402,7 +409,7 @@ class InfoExtractor(object):
             video_info['title'] = playlist_title
         return video_info
-    def _search_regex(self, pattern, string, name, default=_NO_DEFAULT, fatal=True, flags=0):
+    def _search_regex(self, pattern, string, name, default=_NO_DEFAULT, fatal=True, flags=0, group=None):
         """
         Perform a regex search on the given string, using a single or a list of
         patterns returning the first matching group.
@@ -423,8 +430,11 @@ class InfoExtractor(object):
             _name = name
         if mobj:
-            # return the first matching group
-            return next(g for g in mobj.groups() if g is not None)
+            if group is None:
+                # return the first matching group
+                return next(g for g in mobj.groups() if g is not None)
+            else:
+                return mobj.group(group)
         elif default is not _NO_DEFAULT:
             return default
         elif fatal:
@@ -434,11 +444,11 @@ class InfoExtractor(object):
                 'please report this issue on http://yt-dl.org/bug' % _name)
             return None
-    def _html_search_regex(self, pattern, string, name, default=_NO_DEFAULT, fatal=True, flags=0):
+    def _html_search_regex(self, pattern, string, name, default=_NO_DEFAULT, fatal=True, flags=0, group=None):
         """
         Like _search_regex, but strips HTML tags and unescapes entities.
         """
-        res = self._search_regex(pattern, string, name, default, fatal, flags)
+        res = self._search_regex(pattern, string, name, default, fatal, flags, group)
         if res:
             return clean_html(res).strip()
         else:
@@ -532,9 +542,9 @@ class InfoExtractor(object):
             display_name = name
         return self._html_search_regex(
             r'''(?ix)<meta
-                    (?=[^>]+(?:itemprop|name|property)=["\']?%s["\']?)
-                    [^>]+content=["\']([^"\']+)["\']''' % re.escape(name),
-            html, display_name, fatal=fatal, **kwargs)
+                    (?=[^>]+(?:itemprop|name|property)=(["\']?)%s\1)
+                    [^>]+content=(["\'])(?P<content>.*?)\1''' % re.escape(name),
+            html, display_name, fatal=fatal, group='content', **kwargs)
     def _dc_search_uploader(self, html):
         return self._html_search_meta('dc.creator', html, 'uploader')
@@ -610,6 +620,7 @@ class InfoExtractor(object):
         return (
             preference,
+            f.get('language_preference') if f.get('language_preference') is not None else -1,
             f.get('quality') if f.get('quality') is not None else -1,
             f.get('height') if f.get('height') is not None else -1,
             f.get('width') if f.get('width') is not None else -1,
@@ -618,6 +629,7 @@ class InfoExtractor(object):
             f.get('vbr') if f.get('vbr') is not None else -1,
             f.get('abr') if f.get('abr') is not None else -1,
             audio_ext_preference,
+            f.get('fps') if f.get('fps') is not None else -1,
             f.get('filesize') if f.get('filesize') is not None else -1,
             f.get('filesize_approx') if f.get('filesize_approx') is not None else -1,
             f.get('source_preference') if f.get('source_preference') is not None else -1,
@@ -689,7 +701,10 @@ class InfoExtractor(object):
             if re.match(r'^https?://', u)
             else compat_urlparse.urljoin(m3u8_url, u))
-        m3u8_doc = self._download_webpage(m3u8_url, video_id)
+        m3u8_doc = self._download_webpage(
+            m3u8_url, video_id,
+            note='Downloading m3u8 information',
+            errnote='Failed to download m3u8 information')
         last_info = None
         kv_rex = re.compile(
             r'(?P<key>[a-zA-Z_-]+)=(?P<val>"[^"]+"|[^",]+)(?:,|$)')
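
Note: the new group parameter lets callers pick a named group explicitly instead of relying on the first-non-None heuristic, which the rewritten _html_search_meta needs now that its pattern contains quoting groups. A standalone sketch of the behaviour (a simplified model, not the full _search_regex):

    import re

    def search_regex(pattern, string, group=None):
        # Without group, the first non-None capture group wins;
        # with group, that named/numbered group is returned instead
        mobj = re.search(pattern, string)
        if not mobj:
            return None
        if group is None:
            return next(g for g in mobj.groups() if g is not None)
        return mobj.group(group)

    html = '<meta name="title" content="Hello">'
    pattern = r'<meta name=(["\'])title\1 content=(["\'])(?P<content>.*?)\2'
    print(search_regex(pattern, html))                   # '"' - the quoting group
    print(search_regex(pattern, html, group='content'))  # 'Hello'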

View File

@@ -17,7 +17,6 @@ from ..utils import (
     bytes_to_intlist,
     intlist_to_bytes,
     unified_strdate,
-    clean_html,
     urlencode_postdata,
 )
 from ..aes import (
@@ -109,19 +108,17 @@ class CrunchyrollIE(SubtitlesInfoExtractor):
         decrypted_data = intlist_to_bytes(aes_cbc_decrypt(data, key, iv))
         return zlib.decompress(decrypted_data)
-    def _convert_subtitles_to_srt(self, subtitles):
+    def _convert_subtitles_to_srt(self, sub_root):
         output = ''
-        for i, (start, end, text) in enumerate(re.findall(r'<event [^>]*?start="([^"]+)" [^>]*?end="([^"]+)" [^>]*?text="([^"]+)"[^>]*?>', subtitles), 1):
-            start = start.replace('.', ',')
-            end = end.replace('.', ',')
-            text = clean_html(text)
-            text = text.replace('\\N', '\n')
-            if not text:
-                continue
+        for i, event in enumerate(sub_root.findall('./events/event'), 1):
+            start = event.attrib['start'].replace('.', ',')
+            end = event.attrib['end'].replace('.', ',')
+            text = event.attrib['text'].replace('\\N', '\n')
             output += '%d\n%s --> %s\n%s\n\n' % (i, start, end, text)
         return output
-    def _convert_subtitles_to_ass(self, subtitles):
+    def _convert_subtitles_to_ass(self, sub_root):
         output = ''
         def ass_bool(strvalue):
@@ -130,10 +127,6 @@ class CrunchyrollIE(SubtitlesInfoExtractor):
                 assvalue = '-1'
             return assvalue
-        sub_root = xml.etree.ElementTree.fromstring(subtitles)
-        if not sub_root:
-            return output
         output = '[Script Info]\n'
         output += 'Title: %s\n' % sub_root.attrib["title"]
         output += 'ScriptType: v4.00+\n'
@@ -270,10 +263,11 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
             lang_code = self._search_regex(r'lang_code=["\']([^"\']+)', subtitle, 'subtitle_lang_code', fatal=False)
             if not lang_code:
                 continue
+            sub_root = xml.etree.ElementTree.fromstring(subtitle)
             if sub_format == 'ass':
-                subtitles[lang_code] = self._convert_subtitles_to_ass(subtitle)
+                subtitles[lang_code] = self._convert_subtitles_to_ass(sub_root)
             else:
-                subtitles[lang_code] = self._convert_subtitles_to_srt(subtitle)
+                subtitles[lang_code] = self._convert_subtitles_to_srt(sub_root)
         if self._downloader.params.get('listsubtitles', False):
             self._list_available_subtitles(video_id, subtitles)
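
Note: the converters now receive a parsed ElementTree root instead of raw XML text, so each subtitle track is parsed exactly once. A self-contained sketch of the SRT path on a toy document (the sample event is made up):

    import xml.etree.ElementTree as ET

    # Toy subtitle document in the shape the converter now receives
    sub_root = ET.fromstring(
        '<subtitle title="Demo">'
        '<events>'
        '<event start="0:00:01.50" end="0:00:03.00" text="Hello\\Nworld"/>'
        '</events>'
        '</subtitle>')

    output = ''
    for i, event in enumerate(sub_root.findall('./events/event'), 1):
        start = event.attrib['start'].replace('.', ',')  # SRT uses comma decimals
        end = event.attrib['end'].replace('.', ',')
        text = event.attrib['text'].replace('\\N', '\n')  # ASS newline marker
        output += '%d\n%s --> %s\n%s\n\n' % (i, start, end, text)
    print(output)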

View File

@@ -1,25 +0,0 @@
-# encoding: utf-8
-from __future__ import unicode_literals
-from .canalplus import CanalplusIE
-class D8IE(CanalplusIE):
-    _VALID_URL = r'https?://www\.d8\.tv/.*?/(?P<path>.*)'
-    _VIDEO_INFO_TEMPLATE = 'http://service.canal-plus.com/video/rest/getVideosLiees/d8/%s'
-    IE_NAME = 'd8.tv'
-    _TEST = {
-        'url': 'http://www.d8.tv/d8-docs-mags/pid6589-d8-campagne-intime.html',
-        'file': '966289.flv',
-        'info_dict': {
-            'title': 'Campagne intime - Documentaire exceptionnel',
-            'description': 'md5:d2643b799fb190846ae09c61e59a859f',
-            'upload_date': '20131108',
-        },
-        'params': {
-            # rtmp
-            'skip_download': True,
-        },
-        'skip': 'videos get deleted after a while',
-    }

View File

@@ -94,7 +94,7 @@ class DailymotionIE(DailymotionBaseInfoExtractor, SubtitlesInfoExtractor):
         # It may just embed a vevo video:
         m_vevo = re.search(
-            r'<link rel="video_src" href="[^"]*?vevo.com[^"]*?videoId=(?P<id>[\w]*)',
+            r'<link rel="video_src" href="[^"]*?vevo.com[^"]*?video=(?P<id>[\w]*)',
             webpage)
         if m_vevo is not None:
             vevo_id = m_vevo.group('id')

View File

@@ -5,7 +5,8 @@ import os.path
 import re
 from .common import InfoExtractor
-from ..utils import compat_urllib_parse_unquote, url_basename
+from ..compat import compat_urllib_parse_unquote
+from ..utils import url_basename
 class DropboxIE(InfoExtractor):

View File

@@ -1,7 +1,5 @@
 from __future__ import unicode_literals
-import re
 from .subtitles import SubtitlesInfoExtractor
 from .common import ExtractorError
 from ..utils import parse_iso8601
@@ -25,8 +23,7 @@ class DRTVIE(SubtitlesInfoExtractor):
     }
     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)
         programcard = self._download_json(
             'http://www.dr.dk/mu/programcard/expanded/%s' % video_id, video_id, 'Downloading video JSON')
@@ -35,7 +32,7 @@ class DRTVIE(SubtitlesInfoExtractor):
         title = data['Title']
         description = data['Description']
-        timestamp = parse_iso8601(data['CreatedTime'][:-5])
+        timestamp = parse_iso8601(data['CreatedTime'])
         thumbnail = None
         duration = None
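
Note: dropping the [:-5] slice means the timezone suffix on CreatedTime is no longer chopped off before parsing, i.e. parse_iso8601 is trusted to handle the offset itself. A hedged sketch of that kind of offset handling (a simplified stand-in, not youtube-dl's actual helper):

    import calendar
    import datetime
    import re

    def parse_iso8601(date_str):
        # Parse an ISO 8601 timestamp with a Z or +HH:MM/-HH:MM suffix
        # into a UTC unix timestamp
        m = re.match(r'(.+?)(Z|([+-])(\d{2}):(\d{2}))$', date_str)
        base, tz = m.group(1), m.group(2)
        offset = 0
        if tz != 'Z':
            offset = int(m.group(4)) * 3600 + int(m.group(5)) * 60
            offset *= 1 if m.group(3) == '+' else -1
        dt = datetime.datetime.strptime(base, '%Y-%m-%dT%H:%M:%S')
        return calendar.timegm(dt.timetuple()) - offset

    print(parse_iso8601('2014-04-07T13:22:00+02:00'))  # 1396869720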

View File

@@ -20,7 +20,7 @@ class EpornerIE(InfoExtractor):
             'display_id': 'Infamous-Tiffany-Teen-Strip-Tease-Video',
             'ext': 'mp4',
             'title': 'Infamous Tiffany Teen Strip Tease Video',
-            'duration': 194,
+            'duration': 1838,
             'view_count': int,
             'age_limit': 18,
         }
@@ -57,9 +57,7 @@ class EpornerIE(InfoExtractor):
             formats.append(fmt)
         self._sort_formats(formats)
-        duration = parse_duration(self._search_regex(
-            r'class="mbtim">([0-9:]+)</div>', webpage, 'duration',
-            fatal=False))
+        duration = parse_duration(self._html_search_meta('duration', webpage))
         view_count = str_to_int(self._search_regex(
             r'id="cinemaviews">\s*([0-9,]+)\s*<small>views',
             webpage, 'view count', fatal=False))

View File

@@ -5,12 +5,14 @@ import re
 import socket
 from .common import InfoExtractor
-from ..utils import (
+from ..compat import (
     compat_http_client,
     compat_str,
     compat_urllib_error,
     compat_urllib_parse,
     compat_urllib_request,
+)
+from ..utils import (
     urlencode_postdata,
     ExtractorError,
     limit_length,

View File

@@ -1,49 +1,48 @@
 # encoding: utf-8
-import re
+from __future__ import unicode_literals
 from .common import InfoExtractor
-from ..utils import (
-    determine_ext,
-)
 class FazIE(InfoExtractor):
-    IE_NAME = u'faz.net'
+    IE_NAME = 'faz.net'
     _VALID_URL = r'https?://www\.faz\.net/multimedia/videos/.*?-(?P<id>\d+)\.html'
     _TEST = {
-        u'url': u'http://www.faz.net/multimedia/videos/stockholm-chemie-nobelpreis-fuer-drei-amerikanische-forscher-12610585.html',
-        u'file': u'12610585.mp4',
-        u'info_dict': {
-            u'title': u'Stockholm: Chemie-Nobelpreis für drei amerikanische Forscher',
-            u'description': u'md5:1453fbf9a0d041d985a47306192ea253',
+        'url': 'http://www.faz.net/multimedia/videos/stockholm-chemie-nobelpreis-fuer-drei-amerikanische-forscher-12610585.html',
+        'info_dict': {
+            'id': '12610585',
+            'ext': 'mp4',
+            'title': 'Stockholm: Chemie-Nobelpreis für drei amerikanische Forscher',
+            'description': 'md5:1453fbf9a0d041d985a47306192ea253',
         },
     }
     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
-        self.to_screen(video_id)
+        video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)
-        config_xml_url = self._search_regex(r'writeFLV\(\'(.+?)\',', webpage,
-            u'config xml url')
-        config = self._download_xml(config_xml_url, video_id,
-            u'Downloading config xml')
+        config_xml_url = self._search_regex(
+            r'writeFLV\(\'(.+?)\',', webpage, 'config xml url')
+        config = self._download_xml(
+            config_xml_url, video_id, 'Downloading config xml')
         encodings = config.find('ENCODINGS')
         formats = []
-        for code in ['LOW', 'HIGH', 'HQ']:
+        for pref, code in enumerate(['LOW', 'HIGH', 'HQ']):
             encoding = encodings.find(code)
             if encoding is None:
                 continue
             encoding_url = encoding.find('FILENAME').text
             formats.append({
                 'url': encoding_url,
-                'ext': determine_ext(encoding_url),
                 'format_id': code.lower(),
+                'quality': pref,
             })
+        self._sort_formats(formats)
-        descr = self._html_search_regex(r'<p class="Content Copy">(.*?)</p>', webpage, u'description')
+        descr = self._html_search_regex(
+            r'<p class="Content Copy">(.*?)</p>', webpage, 'description', fatal=False)
         return {
             'id': video_id,
             'title': self._og_search_title(webpage),

View File

@@ -1,25 +1,27 @@
+from __future__ import unicode_literals
 import re
 import random
 import json
 from .common import InfoExtractor
 from ..utils import (
-    determine_ext,
     get_element_by_id,
     clean_html,
 )
 class FKTVIE(InfoExtractor):
-    IE_NAME = u'fernsehkritik.tv'
-    _VALID_URL = r'(?:http://)?(?:www\.)?fernsehkritik\.tv/folge-(?P<ep>[0-9]+)(?:/.*)?'
+    IE_NAME = 'fernsehkritik.tv'
+    _VALID_URL = r'http://(?:www\.)?fernsehkritik\.tv/folge-(?P<ep>[0-9]+)(?:/.*)?'
     _TEST = {
-        u'url': u'http://fernsehkritik.tv/folge-1',
-        u'file': u'00011.flv',
-        u'info_dict': {
-            u'title': u'Folge 1 vom 10. April 2007',
-            u'description': u'md5:fb4818139c7cfe6907d4b83412a6864f',
+        'url': 'http://fernsehkritik.tv/folge-1',
+        'info_dict': {
+            'id': '00011',
+            'ext': 'flv',
+            'title': 'Folge 1 vom 10. April 2007',
+            'description': 'md5:fb4818139c7cfe6907d4b83412a6864f',
         },
     }
@@ -32,7 +34,7 @@ class FKTVIE(InfoExtractor):
         start_webpage = self._download_webpage('http://fernsehkritik.tv/folge-%d/Start' % episode,
             episode)
         playlist = self._search_regex(r'playlist = (\[.*?\]);', start_webpage,
-            u'playlist', flags=re.DOTALL)
+            'playlist', flags=re.DOTALL)
         files = json.loads(re.sub('{[^{}]*?}', '{}', playlist))
         # TODO: return a single multipart video
         videos = []
@@ -42,7 +44,6 @@ class FKTVIE(InfoExtractor):
             videos.append({
                 'id': video_id,
                 'url': video_url,
-                'ext': determine_ext(video_url),
                 'title': clean_html(get_element_by_id('eptitle', start_webpage)),
                 'description': clean_html(get_element_by_id('contentlist', start_webpage)),
                 'thumbnail': video_thumbnail
@@ -51,14 +52,15 @@ class FKTVIE(InfoExtractor):
 class FKTVPosteckeIE(InfoExtractor):
-    IE_NAME = u'fernsehkritik.tv:postecke'
-    _VALID_URL = r'(?:http://)?(?:www\.)?fernsehkritik\.tv/inline-video/postecke\.php\?(.*&)?ep=(?P<ep>[0-9]+)(&|$)'
+    IE_NAME = 'fernsehkritik.tv:postecke'
+    _VALID_URL = r'http://(?:www\.)?fernsehkritik\.tv/inline-video/postecke\.php\?(.*&)?ep=(?P<ep>[0-9]+)(&|$)'
     _TEST = {
-        u'url': u'http://fernsehkritik.tv/inline-video/postecke.php?iframe=true&width=625&height=440&ep=120',
-        u'file': u'0120.flv',
-        u'md5': u'262f0adbac80317412f7e57b4808e5c4',
-        u'info_dict': {
-            u"title": u"Postecke 120"
+        'url': 'http://fernsehkritik.tv/inline-video/postecke.php?iframe=true&width=625&height=440&ep=120',
+        'md5': '262f0adbac80317412f7e57b4808e5c4',
+        'info_dict': {
+            'id': '0120',
+            'ext': 'flv',
+            'title': 'Postecke 120',
         }
     }
@@ -71,8 +73,7 @@ class FKTVPosteckeIE(InfoExtractor):
         video_url = 'http://dl%d.fernsehkritik.tv/postecke/postecke%d.flv' % (server, episode)
         video_title = 'Postecke %d' % episode
         return {
             'id': video_id,
             'url': video_url,
-            'ext': determine_ext(video_url),
             'title': video_title,
         }
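
Note: in the FazIE hunk above, enumerate() turns the position in the LOW/HIGH/HQ list into a 'quality' rank that _sort_formats can use, replacing the fixed ordering. A standalone sketch:

    # Position in the list becomes the quality rank: later entries sort higher
    formats = []
    for pref, code in enumerate(['LOW', 'HIGH', 'HQ']):
        formats.append({'format_id': code.lower(), 'quality': pref})

    # youtube-dl's _sort_formats orders ascending by these keys, so the best
    # format ends up last; a plain sort shows the same effect
    formats.sort(key=lambda f: f['quality'])
    print([f['format_id'] for f in formats])  # ['low', 'high', 'hq']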

View File

@@ -93,7 +93,6 @@ class FranceTvInfoIE(FranceTVBaseInfoExtractor):
     _TESTS = [{
         'url': 'http://www.francetvinfo.fr/replay-jt/france-3/soir-3/jt-grand-soir-3-lundi-26-aout-2013_393427.html',
-        'md5': '9cecf35f99c4079c199e9817882a9a1c',
         'info_dict': {
             'id': '84981923',
             'ext': 'flv',

View File

@@ -0,0 +1,38 @@
+from __future__ import unicode_literals
+from .common import InfoExtractor
+from ..utils import ExtractorError
+class FreeVideoIE(InfoExtractor):
+    _VALID_URL = r'^http://www.freevideo.cz/vase-videa/(?P<id>[^.]+)\.html(?:$|[?#])'
+    _TEST = {
+        'url': 'http://www.freevideo.cz/vase-videa/vysukany-zadecek-22033.html',
+        'info_dict': {
+            'id': 'vysukany-zadecek-22033',
+            'ext': 'mp4',
+            "title": "vysukany-zadecek-22033",
+            "age_limit": 18,
+        },
+        'skip': 'Blocked outside .cz',
+    }
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage, handle = self._download_webpage_handle(url, video_id)
+        if '//www.czechav.com/' in handle.geturl():
+            raise ExtractorError(
+                'Access to freevideo is blocked from your location',
+                expected=True)
+        video_url = self._search_regex(
+            r'\s+url: "(http://[a-z0-9-]+.cdn.freevideo.cz/stream/.*?/video.mp4)"',
+            webpage, 'video URL')
+        return {
+            'id': video_id,
+            'url': video_url,
+            'title': video_id,
+            'age_limit': 18,
+        }

View File

@@ -8,7 +8,7 @@ from ..utils import ExtractorError
 class FunnyOrDieIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?funnyordie\.com/(?P<type>embed|videos)/(?P<id>[0-9a-f]+)(?:$|[?#/])'
+    _VALID_URL = r'https?://(?:www\.)?funnyordie\.com/(?P<type>embed|articles|videos)/(?P<id>[0-9a-f]+)(?:$|[?#/])'
     _TESTS = [{
         'url': 'http://www.funnyordie.com/videos/0732f586d7/heart-shaped-box-literal-video-version',
         'md5': 'bcd81e0c4f26189ee09be362ad6e6ba9',
@@ -21,7 +21,6 @@ class FunnyOrDieIE(InfoExtractor):
         },
     }, {
         'url': 'http://www.funnyordie.com/embed/e402820827',
-        'md5': '29f4c5e5a61ca39dfd7e8348a75d0aad',
         'info_dict': {
             'id': 'e402820827',
             'ext': 'mp4',
@@ -29,6 +28,9 @@ class FunnyOrDieIE(InfoExtractor):
             'description': 'Please use this to sell something. www.jonlajoie.com',
             'thumbnail': 're:^http:.*\.jpg$',
         },
+    }, {
+        'url': 'http://www.funnyordie.com/articles/ebf5e34fc8/10-hours-of-walking-in-nyc-as-a-man',
+        'only_matching': True,
     }]
     def _real_extract(self, url):

View File

@@ -8,12 +8,11 @@ from ..utils import (
     compat_urllib_parse,
     compat_urlparse,
     unescapeHTML,
-    get_meta_content,
 )
 class GameSpotIE(InfoExtractor):
-    _VALID_URL = r'(?:http://)?(?:www\.)?gamespot\.com/.*-(?P<page_id>\d+)/?'
+    _VALID_URL = r'(?:http://)?(?:www\.)?gamespot\.com/.*-(?P<id>\d+)/?'
     _TEST = {
         'url': 'http://www.gamespot.com/videos/arma-3-community-guide-sitrep-i/2300-6410818/',
         'md5': 'b2a30deaa8654fcccd43713a6b6a4825',
@@ -26,10 +25,10 @@ class GameSpotIE(InfoExtractor):
     }
     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        page_id = mobj.group('page_id')
+        page_id = self._match_id(url)
         webpage = self._download_webpage(url, page_id)
-        data_video_json = self._search_regex(r'data-video=["\'](.*?)["\']', webpage, 'data video')
+        data_video_json = self._search_regex(
+            r'data-video=["\'](.*?)["\']', webpage, 'data video')
         data_video = json.loads(unescapeHTML(data_video_json))
         # Transform the manifest url to a link to the mp4 files
@@ -41,7 +40,8 @@ class GameSpotIE(InfoExtractor):
         http_path = f4m_path[1:].split('/', 1)[1]
         http_template = re.sub(QUALITIES_RE, r'%s', http_path)
         http_template = http_template.replace('.csmil/manifest.f4m', '')
-        http_template = compat_urlparse.urljoin('http://video.gamespotcdn.com/', http_template)
+        http_template = compat_urlparse.urljoin(
+            'http://video.gamespotcdn.com/', http_template)
         formats = []
         for q in qualities:
             formats.append({
@@ -52,8 +52,9 @@ class GameSpotIE(InfoExtractor):
         return {
             'id': data_video['guid'],
+            'display_id': page_id,
             'title': compat_urllib_parse.unquote(data_video['title']),
             'formats': formats,
-            'description': get_meta_content('description', webpage),
+            'description': self._html_search_meta('description', webpage),
             'thumbnail': self._og_search_thumbnail(webpage),
         }

View File

@@ -7,11 +7,12 @@ import re
 from .common import InfoExtractor
 from .youtube import YoutubeIE
-from ..utils import (
+from ..compat import (
     compat_urllib_parse,
     compat_urlparse,
     compat_xml_parse_error,
+)
+from ..utils import (
     determine_ext,
     ExtractorError,
     float_or_none,
@@ -99,6 +100,22 @@ class GenericIE(InfoExtractor):
                 'uploader': 'Championat',
             },
         },
+        {
+            # https://github.com/rg3/youtube-dl/issues/3541
+            'add_ie': ['Brightcove'],
+            'url': 'http://www.kijk.nl/sbs6/leermijvrouwenkennen/videos/jqMiXKAYan2S/aflevering-1',
+            'info_dict': {
+                'id': '3866516442001',
+                'ext': 'mp4',
+                'title': 'Leer mij vrouwen kennen: Aflevering 1',
+                'description': 'Leer mij vrouwen kennen: Aflevering 1',
+                'uploader': 'SBS Broadcasting',
+            },
+            'skip': 'Restricted to Netherlands',
+            'params': {
+                'skip_download': True,  # m3u8 download
+            },
+        },
         # Direct link to a video
         {
             'url': 'http://media.w3.org/2010/05/sintel/trailer.mp4',
@@ -325,7 +342,7 @@ class GenericIE(InfoExtractor):
                 'ext': 'mp4',
                 'age_limit': 18,
                 'uploader': 'www.handjobhub.com',
-                'title': 'Busty Blonde Siri Tit Fuck While Wank at Handjob Hub',
+                'title': 'Busty Blonde Siri Tit Fuck While Wank at HandjobHub.com',
             }
         },
         # RSS feed
@@ -389,7 +406,44 @@ class GenericIE(InfoExtractor):
                 'title': 'Conversation about Hexagonal Rails Part 1 - ThoughtWorks',
                 'duration': 1715.0,
                 'uploader': 'thoughtworks.wistia.com',
             },
+        },
+        # Direct download with broken HEAD
+        {
+            'url': 'http://ai-radio.org:8000/radio.opus',
+            'info_dict': {
+                'id': 'radio',
+                'ext': 'opus',
+                'title': 'radio',
+            },
+            'params': {
+                'skip_download': True,  # infinite live stream
+            },
+            'expected_warnings': [
+                r'501.*Not Implemented'
+            ],
+        },
+        # Soundcloud embed
+        {
+            'url': 'http://nakedsecurity.sophos.com/2014/10/29/sscc-171-are-you-sure-that-1234-is-a-bad-password-podcast/',
+            'info_dict': {
+                'id': '174391317',
+                'ext': 'mp3',
+                'description': 'md5:ff867d6b555488ad3c52572bb33d432c',
+                'uploader': 'Sophos Security',
+                'title': 'Chet Chat 171 - Oct 29, 2014',
+                'upload_date': '20141029',
+            }
+        },
+        # Livestream embed
+        {
+            'url': 'http://www.esa.int/Our_Activities/Space_Science/Rosetta/Philae_comet_touch-down_webcast',
+            'info_dict': {
+                'id': '67864563',
+                'ext': 'flv',
+                'upload_date': '20141112',
+                'title': 'Rosetta #CometLanding webcast HL 10',
+            }
         },
     ]
@@ -532,6 +586,7 @@ class GenericIE(InfoExtractor):
             return {
                 'id': video_id,
                 'title': os.path.splitext(url_basename(url))[0],
+                'direct': True,
                 'formats': [{
                     'format_id': m.group('format_id'),
                     'url': url,
@@ -544,7 +599,7 @@ class GenericIE(InfoExtractor):
         self._downloader.report_warning('Falling back on generic information extractor.')
         if full_response:
-            webpage = _webpage_read_content(url, video_id)
+            webpage = self._webpage_read_content(full_response, url, video_id)
         else:
             webpage = self._download_webpage(url, video_id)
         self.report_extraction(video_id)
@@ -823,7 +878,7 @@ class GenericIE(InfoExtractor):
         # Look for embeded soundcloud player
         mobj = re.search(
-            r'<iframe src="(?P<url>https?://(?:w\.)?soundcloud\.com/player[^"]+)"',
+            r'<iframe\s+(?:[a-zA-Z0-9_-]+="[^"]+"\s+)*src="(?P<url>https?://(?:w\.)?soundcloud\.com/player[^"]+)"',
             webpage)
         if mobj is not None:
             url = unescapeHTML(mobj.group('url'))
@@ -860,7 +915,7 @@ class GenericIE(InfoExtractor):
             return self.url_result(mobj.group('url'), 'SBS')
         mobj = re.search(
-            r'<iframe[^>]+?src=(["\'])(?P<url>https?://m\.mlb\.com/shared/video/embed/embed\.html\?.+?)\1',
+            r'<iframe[^>]+?src=(["\'])(?P<url>https?://m(?:lb)?\.mlb\.com/shared/video/embed/embed\.html\?.+?)\1',
             webpage)
         if mobj is not None:
             return self.url_result(mobj.group('url'), 'MLB')
@@ -871,6 +926,12 @@ class GenericIE(InfoExtractor):
         if mobj is not None:
             return self.url_result(self._proto_relative_url(mobj.group('url'), scheme='http:'), 'CondeNast')
+        mobj = re.search(
+            r'<iframe[^>]+src="(?P<url>https?://new\.livestream\.com/[^"]+/player[^"]+)"',
+            webpage)
+        if mobj is not None:
+            return self.url_result(mobj.group('url'), 'Livestream')
         def check_video(vurl):
             vpath = compat_urlparse.urlparse(vurl).path
             vext = determine_ext(vpath)
@@ -918,7 +979,7 @@ class GenericIE(InfoExtractor):
             found = filter_video(re.findall(r'<meta.*?property="og:video".*?content="(.*?)"', webpage))
         if not found:
             # HTML5 video
-            found = re.findall(r'(?s)<video[^<]*(?:>.*?<source[^>]+)? src="([^"]+)"', webpage)
+            found = re.findall(r'(?s)<video[^<]*(?:>.*?<source[^>]*)?\s+src="([^"]+)"', webpage)
         if not found:
             found = re.search(
                 r'(?i)<meta\s+(?=(?:[a-z-]+="[^"]+"\s+)*http-equiv="refresh")'
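
Note: the Livestream hunk follows GenericIE's usual embed-detection pattern: probe the page with a site-specific iframe regex, then delegate via url_result. A self-contained sketch of the probe (the sample HTML is made up):

    import re

    webpage = '<iframe src="https://new.livestream.com/accounts/x/events/y/player?a=1"></iframe>'

    mobj = re.search(
        r'<iframe[^>]+src="(?P<url>https?://new\.livestream\.com/[^"]+/player[^"]+)"',
        webpage)
    if mobj is not None:
        # In GenericIE this would be:
        #   return self.url_result(mobj.group('url'), 'Livestream')
        print(mobj.group('url'))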

View File

@@ -5,13 +5,15 @@ import random
 import math
 from .common import InfoExtractor
-from ..utils import (
-    ExtractorError,
-    float_or_none,
+from ..compat import (
     compat_str,
     compat_chr,
     compat_ord,
 )
+from ..utils import (
+    ExtractorError,
+    float_or_none,
+)
 class GloboIE(InfoExtractor):

View File

@@ -0,0 +1,59 @@
+from __future__ import unicode_literals
+import re
+from .common import InfoExtractor
+from ..utils import (
+    parse_duration,
+    int_or_none,
+)
+class GoldenMoustacheIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?goldenmoustache\.com/(?P<display_id>[\w-]+)-(?P<id>\d+)'
+    _TESTS = [{
+        'url': 'http://www.goldenmoustache.com/suricate-le-poker-3700/',
+        'md5': '0f904432fa07da5054d6c8beb5efb51a',
+        'info_dict': {
+            'id': '3700',
+            'ext': 'mp4',
+            'title': 'Suricate - Le Poker',
+            'description': 'md5:3d1f242f44f8c8cb0a106f1fd08e5dc9',
+            'thumbnail': 're:^https?://.*\.jpg$',
+            'view_count': int,
+        }
+    }, {
+        'url': 'http://www.goldenmoustache.com/le-lab-tout-effacer-mc-fly-et-carlito-55249/',
+        'md5': '27f0c50fb4dd5f01dc9082fc67cd5700',
+        'info_dict': {
+            'id': '55249',
+            'ext': 'mp4',
+            'title': 'Le LAB - Tout Effacer (Mc Fly et Carlito)',
+            'description': 'md5:9b7fbf11023fb2250bd4b185e3de3b2a',
+            'thumbnail': 're:^https?://.*\.(?:png|jpg)$',
+            'view_count': int,
+        }
+    }]
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+        video_url = self._html_search_regex(
+            r'data-src-type="mp4" data-src="([^"]+)"', webpage, 'video URL')
+        title = self._html_search_regex(
+            r'<title>(.*?)(?: - Golden Moustache)?</title>', webpage, 'title')
+        thumbnail = self._og_search_thumbnail(webpage)
+        description = self._og_search_description(webpage)
+        view_count = int_or_none(self._html_search_regex(
+            r'<strong>([0-9]+)</strong>\s*VUES</span>',
+            webpage, 'view count', fatal=False))
+        return {
+            'id': video_id,
+            'url': video_url,
+            'ext': 'mp4',
+            'title': title,
+            'description': description,
+            'thumbnail': thumbnail,
+            'view_count': view_count,
+        }

View File

@@ -46,9 +46,9 @@ class GorillaVidIE(InfoExtractor):
         'info_dict': {
             'id': '3rso4kdn6f9m',
             'ext': 'mp4',
-            'title': 'Micro Pig piglets ready on 16th July 2009',
+            'title': 'Micro Pig piglets ready on 16th July 2009-bG0PdrCdxUc',
             'thumbnail': 're:http://.*\.jpg',
-        },
+        }
     }, {
         'url': 'http://movpod.in/0wguyyxi1yca',
         'only_matching': True,

View File

@@ -1,15 +1,11 @@
 # -*- coding: utf-8 -*-
 from __future__ import unicode_literals
-import re
 from .common import InfoExtractor
 from ..utils import (
     compat_urlparse,
-    str_to_int,
     ExtractorError,
 )
-import json
 class GoshgayIE(InfoExtractor):
@@ -27,36 +23,27 @@ class GoshgayIE(InfoExtractor):
     }
     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)
-        title = self._search_regex(r'class="video-title"><h1>(.+?)<', webpage, 'title')
+        title = self._og_search_title(webpage)
+        thumbnail = self._og_search_thumbnail(webpage)
+        family_friendly = self._html_search_meta(
+            'isFamilyFriendly', webpage, default='false')
+        config_url = self._search_regex(
+            r"'config'\s*:\s*'([^']+)'", webpage, 'config URL')
-        player_config = self._search_regex(
-            r'(?s)jwplayer\("player"\)\.setup\(({.+?})\)', webpage, 'config settings')
-        player_vars = json.loads(player_config.replace("'", '"'))
-        width = str_to_int(player_vars.get('width'))
-        height = str_to_int(player_vars.get('height'))
-        config_uri = player_vars.get('config')
-        if config_uri is None:
-            raise ExtractorError('Missing config URI')
-        node = self._download_xml(config_uri, video_id, 'Downloading player config XML',
-            errnote='Unable to download XML')
-        if node is None:
+        config = self._download_xml(
+            config_url, video_id, 'Downloading player config XML')
+        if config is None:
             raise ExtractorError('Missing config XML')
-        if node.tag != 'config':
+        if config.tag != 'config':
             raise ExtractorError('Missing config attribute')
-        fns = node.findall('file')
-        imgs = node.findall('image')
-        if len(fns) != 1:
+        fns = config.findall('file')
+        if len(fns) < 1:
             raise ExtractorError('Missing media URI')
         video_url = fns[0].text
-        if len(imgs) < 1:
-            thumbnail = None
-        else:
-            thumbnail = imgs[0].text
         url_comp = compat_urlparse.urlparse(url)
         ref = "%s://%s%s" % (url_comp[0], url_comp[1], url_comp[2])
@@ -65,9 +52,7 @@ class GoshgayIE(InfoExtractor):
             'id': video_id,
             'url': video_url,
             'title': title,
-            'width': width,
-            'height': height,
             'thumbnail': thumbnail,
             'http_referer': ref,
-            'age_limit': 18,
+            'age_limit': 0 if family_friendly == 'true' else 18,
         }
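
Note: the `mobj = re.match(self._VALID_URL, url)` / `mobj.group('id')` boilerplate removed here (and in many hunks below) is exactly what `InfoExtractor._match_id` factors out. A minimal sketch of the idea, assuming the helper simply reapplies `_VALID_URL` and requires a named `id` group:

import re

def _match_id(self, url):
    # _VALID_URL already matched during extractor selection, so this cannot fail
    mobj = re.match(self._VALID_URL, url)
    return mobj.group('id')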

View File

@@ -8,12 +8,13 @@ import re
 from .common import InfoExtractor
-from ..utils import ExtractorError, compat_urllib_request, compat_html_parser
-from ..utils import (
+from ..compat import (
+    compat_html_parser,
     compat_urllib_parse,
+    compat_urllib_request,
     compat_urlparse,
 )
+from ..utils import ExtractorError


 class GroovesharkHtmlParser(compat_html_parser.HTMLParser):

View File

@@ -3,7 +3,8 @@ from __future__ import unicode_literals
 from .common import InfoExtractor
 from ..utils import (
-    get_meta_content,
+    determine_ext,
+    int_or_none,
     parse_iso8601,
 )
@@ -24,57 +25,54 @@ class HeiseIE(InfoExtractor):
             'title': (
                 "Podcast: c't uplink 3.3 Owncloud / Tastaturen / Peilsender Smartphone"
             ),
-            'format_id': 'mp4_720',
+            'format_id': 'mp4_720p',
             'timestamp': 1411812600,
             'upload_date': '20140927',
             'description': 'In uplink-Episode 3.3 geht es darum, wie man sich von Cloud-Anbietern emanzipieren kann, worauf man beim Kauf einer Tastatur achten sollte und was Smartphones über uns verraten.',
+            'thumbnail': 're:^https?://.*\.jpe?g$',
         }
     }

     def _real_extract(self, url):
         video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)
-        json_url = self._search_regex(
-            r'json_url:\s*"([^"]+)"', webpage, 'json URL')
-        config = self._download_json(json_url, video_id)
+
+        container_id = self._search_regex(
+            r'<div class="videoplayerjw".*?data-container="([0-9]+)"',
+            webpage, 'container ID')
+        sequenz_id = self._search_regex(
+            r'<div class="videoplayerjw".*?data-sequenz="([0-9]+)"',
+            webpage, 'sequenz ID')
+        data_url = 'http://www.heise.de/videout/feed?container=%s&sequenz=%s' % (container_id, sequenz_id)
+        doc = self._download_xml(data_url, video_id)

         info = {
             'id': video_id,
-            'thumbnail': config.get('poster'),
-            'timestamp': parse_iso8601(get_meta_content('date', webpage)),
+            'thumbnail': self._og_search_thumbnail(webpage),
+            'timestamp': parse_iso8601(
+                self._html_search_meta('date', webpage)),
             'description': self._og_search_description(webpage),
         }

-        title = get_meta_content('fulltitle', webpage)
+        title = self._html_search_meta('fulltitle', webpage)
         if title:
             info['title'] = title
-        elif config.get('title'):
-            info['title'] = config['title']
         else:
             info['title'] = self._og_search_title(webpage)

         formats = []
-        for t, rs in config['formats'].items():
-            if not rs or not hasattr(rs, 'items'):
-                self._downloader.report_warning(
-                    'formats: {0}: no resolutions'.format(t))
-                continue
-
-            for height_str, obj in rs.items():
-                format_id = '{0}_{1}'.format(t, height_str)
-
-                if not obj or not obj.get('url'):
-                    self._downloader.report_warning(
-                        'formats: {0}: no url'.format(format_id))
-                    continue
-
-                formats.append({
-                    'url': obj['url'],
-                    'format_id': format_id,
-                    'height': self._int(height_str, 'height'),
-                })
+        for source_node in doc.findall('.//{http://rss.jwpcdn.com/}source'):
+            label = source_node.attrib['label']
+            height = int_or_none(self._search_regex(
+                r'^(.*?_)?([0-9]+)p$', label, 'height', default=None))
+            video_url = source_node.attrib['file']
+            ext = determine_ext(video_url, '')
+            formats.append({
+                'url': video_url,
+                'format_note': label,
+                'format_id': '%s_%s' % (ext, label),
+                'height': height,
+            })
         self._sort_formats(formats)

         info['formats'] = formats
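
Note: the `{http://rss.jwpcdn.com/}source` path above is how ElementTree spells a namespaced tag, with the namespace URI in braces instead of an XML prefix. A standalone sketch against a made-up two-source feed:

import xml.etree.ElementTree as ET

NS = '{http://rss.jwpcdn.com/}'
doc = ET.fromstring(
    '<rss xmlns:jwplayer="http://rss.jwpcdn.com/"><channel><item>'
    '<jwplayer:source file="v_720p.mp4" label="mp4_720p"/>'
    '<jwplayer:source file="v_360p.webm" label="webm_360p"/>'
    '</item></channel></rss>')
for source in doc.findall('.//%ssource' % NS):
    # unprefixed attributes are not namespaced, so plain keys work here
    print('%s -> %s' % (source.attrib['label'], source.attrib['file']))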

View File

@@ -1,7 +1,5 @@
 from __future__ import unicode_literals

-import re
-
 from .common import InfoExtractor
@@ -20,13 +18,11 @@ class IconosquareIE(InfoExtractor):
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)

         webpage = self._download_webpage(url, video_id)

-        html_title = self._html_search_regex(
-            r'<title>(.+?)</title>',
+        title = self._html_search_regex(
+            r'<title>(.+?)(?: *\(Videos?\))? \| (?:Iconosquare|Statigram)</title>',
             webpage, 'title')
-        title = re.sub(r'(?: *\(Videos?\))? \| (?:Iconosquare|Statigram)$', '', html_title)
         uploader_id = self._html_search_regex(
             r'@([^ ]+)', title, 'uploader name', fatal=False)

View File

@@ -6,7 +6,6 @@ import json
 from .common import InfoExtractor
 from ..utils import (
     compat_urlparse,
-    get_element_by_attribute,
 )
@@ -27,10 +26,11 @@ class ImdbIE(InfoExtractor):
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)
         webpage = self._download_webpage('http://www.imdb.com/video/imdb/vi%s' % video_id, video_id)
-        descr = get_element_by_attribute('itemprop', 'description', webpage)
+        descr = self._html_search_regex(
+            r'(?s)<span itemprop="description">(.*?)</span>',
+            webpage, 'description', fatal=False)
         available_formats = re.findall(
             r'case \'(?P<f_id>.*?)\' :$\s+url = \'(?P<path>.*?)\'', webpage,
             flags=re.MULTILINE)
@@ -73,9 +73,7 @@ class ImdbListIE(InfoExtractor):
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        list_id = mobj.group('id')
+        list_id = self._match_id(url)
         webpage = self._download_webpage(url, list_id)
         entries = [
             self.url_result('http://www.imdb.com' + m, 'Imdb')

View File

@@ -5,11 +5,11 @@ import re
 from .common import InfoExtractor
 from ..utils import (
-    get_element_by_id,
-    parse_iso8601,
     determine_ext,
-    int_or_none,
     float_or_none,
+    get_element_by_id,
+    int_or_none,
+    parse_iso8601,
     str_to_int,
 )
@@ -30,7 +30,7 @@ class IzleseneIE(InfoExtractor):
             'description': 'md5:253753e2655dde93f59f74b572454f6d',
             'thumbnail': 're:^http://.*\.jpg',
             'uploader_id': 'pelikzzle',
-            'timestamp': 1404298698,
+            'timestamp': 1404302298,
             'upload_date': '20140702',
             'duration': 95.395,
             'age_limit': 0,
@@ -46,7 +46,7 @@ class IzleseneIE(InfoExtractor):
             'description': 'Tarkan Dortmund 2006 Konseri',
             'thumbnail': 're:^http://.*\.jpg',
             'uploader_id': 'parlayankiz',
-            'timestamp': 1163318593,
+            'timestamp': 1163322193,
             'upload_date': '20061112',
             'duration': 253.666,
             'age_limit': 0,
@@ -55,10 +55,9 @@ class IzleseneIE(InfoExtractor):
     ]

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)
         url = 'http://www.izlesene.com/video/%s' % video_id

         webpage = self._download_webpage(url, video_id)

         title = self._og_search_title(webpage)

View File

@@ -1,8 +1,6 @@
 # encoding: utf-8
 from __future__ import unicode_literals

-import re
-
 from .common import InfoExtractor
@@ -21,22 +19,17 @@ class KickStarterIE(InfoExtractor):
     }, {
         'note': 'Embedded video (not using the native kickstarter video service)',
         'url': 'https://www.kickstarter.com/projects/597507018/pebble-e-paper-watch-for-iphone-and-android/posts/659178',
-        'playlist': [
-            {
-                'info_dict': {
-                    'id': '78704821',
-                    'ext': 'mp4',
-                    'uploader_id': 'pebble',
-                    'uploader': 'Pebble Technology',
-                    'title': 'Pebble iOS Notifications',
-                }
-            }
-        ],
+        'info_dict': {
+            'id': '78704821',
+            'ext': 'mp4',
+            'uploader_id': 'pebble',
+            'uploader': 'Pebble Technology',
+            'title': 'Pebble iOS Notifications',
+        }
     }]

     def _real_extract(self, url):
-        m = re.match(self._VALID_URL, url)
-        video_id = m.group('id')
+        video_id = self._match_id(url)

         webpage = self._download_webpage(url, video_id)

         title = self._html_search_regex(

View File

@@ -1,7 +1,5 @@
 from __future__ import unicode_literals

-import re
-
 from .common import InfoExtractor
@@ -18,11 +16,11 @@ class Ku6IE(InfoExtractor):
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)

-        title = self._search_regex(r'<h1 title=.*>(.*?)</h1>', webpage, 'title')
+        title = self._html_search_regex(
+            r'<h1 title=.*>(.*?)</h1>', webpage, 'title')
         dataUrl = 'http://v.ku6.com/fetchVideo4Player/%s.html' % video_id
         jsonData = self._download_json(dataUrl, video_id)
         downloadUrl = jsonData['data']['f']

View File

@@ -0,0 +1,78 @@
from __future__ import unicode_literals
import random
import re
from .common import InfoExtractor
from ..utils import ExtractorError
class Laola1TvIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?laola1\.tv/(?P<lang>[a-z]+)-(?P<portal>[a-z]+)/.*?/(?P<id>[0-9]+)\.html'
_TEST = {
'url': 'http://www.laola1.tv/de-de/live/bwf-bitburger-open-grand-prix-gold-court-1/250019.html',
'info_dict': {
'id': '250019',
'ext': 'mp4',
'title': 'Bitburger Open Grand Prix Gold - Court 1',
'categories': ['Badminton'],
'uploader': 'BWF - Badminton World Federation',
'is_live': True,
},
'params': {
'skip_download': True,
}
}
_BROKEN = True # Not really - extractor works fine, but f4m downloader does not support live streams yet.
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
video_id = mobj.group('id')
lang = mobj.group('lang')
portal = mobj.group('portal')
webpage = self._download_webpage(url, video_id)
iframe_url = self._search_regex(
r'<iframe[^>]*?class="main_tv_player"[^>]*?src="([^"]+)"',
webpage, 'iframe URL')
iframe = self._download_webpage(
iframe_url, video_id, note='Downloading iframe')
flashvars_m = re.findall(
r'flashvars\.([_a-zA-Z0-9]+)\s*=\s*"([^"]*)";', iframe)
flashvars = dict((m[0], m[1]) for m in flashvars_m)
xml_url = ('http://www.laola1.tv/server/hd_video.php?' +
'play=%s&partner=1&portal=%s&v5ident=&lang=%s' % (
video_id, portal, lang))
hd_doc = self._download_xml(xml_url, video_id)
title = hd_doc.find('.//video/title').text
flash_url = hd_doc.find('.//video/url').text
categories = hd_doc.find('.//video/meta_sports').text.split(',')
uploader = hd_doc.find('.//video/meta_organistation').text
ident = random.randint(10000000, 99999999)
token_url = '%s&ident=%s&klub=0&unikey=0&timestamp=%s&auth=%s' % (
flash_url, ident, flashvars['timestamp'], flashvars['auth'])
token_doc = self._download_xml(
token_url, video_id, note='Downloading token')
token_attrib = token_doc.find('.//token').attrib
if token_attrib.get('auth') == 'blocked':
raise ExtractorError('Token error: %s' % token_attrib.get('comment'))
video_url = '%s?hdnea=%s&hdcore=3.2.0' % (
token_attrib['url'], token_attrib['auth'])
return {
'id': video_id,
'is_live': True,
'title': title,
'url': video_url,
'uploader': uploader,
'categories': categories,
'ext': 'mp4',
}
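
Side note on the flashvars parsing above: `re.findall` with two capture groups already yields 2-tuples, so the explicit `dict((m[0], m[1]) for m in ...)` generator is equivalent to feeding the match list straight to `dict`:

import re

iframe = 'flashvars.timestamp = "1416000000"; flashvars.auth = "abc123";'
flashvars = dict(re.findall(
    r'flashvars\.([_a-zA-Z0-9]+)\s*=\s*"([^"]*)";', iframe))
# {'timestamp': '1416000000', 'auth': 'abc123'}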

View File

@@ -18,7 +18,7 @@ from ..utils import (
 class LivestreamIE(InfoExtractor):
     IE_NAME = 'livestream'
-    _VALID_URL = r'http://new\.livestream\.com/.*?/(?P<event_name>.*?)(/videos/(?P<id>\d+))?/?$'
+    _VALID_URL = r'https?://new\.livestream\.com/.*?/(?P<event_name>.*?)(/videos/(?P<id>[0-9]+)(?:/player)?)?/?(?:$|[?#])'
     _TESTS = [{
         'url': 'http://new.livestream.com/CoheedandCambria/WebsterHall/videos/4719370',
         'md5': '53274c76ba7754fb0e8d072716f2292b',
@@ -37,6 +37,9 @@ class LivestreamIE(InfoExtractor):
             'title': 'TEDCity2.0 (English)',
         },
         'playlist_mincount': 4,
+    }, {
+        'url': 'https://new.livestream.com/accounts/362/events/3557232/videos/67864563/player?autoPlay=false&height=360&mute=false&width=640',
+        'only_matching': True,
     }]

     def _parse_smil(self, video_id, smil_url):
@@ -190,7 +193,8 @@ class LivestreamOriginalIE(InfoExtractor):
             'id': video_id,
             'title': item.find('title').text,
             'url': 'rtmp://extondemand.livestream.com/ondemand',
-            'play_path': 'mp4:trans/dv15/mogulus-{0}.mp4'.format(path),
+            'play_path': 'trans/dv15/mogulus-{0}'.format(path),
+            'player_url': 'http://static.livestream.com/chromelessPlayer/v21/playerapi.swf?hash=5uetk&v=0803&classid=D27CDB6E-AE6D-11cf-96B8-444553540000&jsEnabled=false&wmode=opaque',
             'ext': 'flv',
             'thumbnail': thumbnail_url,
         }

View File

@@ -32,9 +32,7 @@ class LRTIE(InfoExtractor):
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
-
+        video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)

         title = remove_end(self._og_search_title(webpage), ' - LRT')

View File

@@ -16,7 +16,7 @@ class MailRuIE(InfoExtractor):
         'url': 'http://my.mail.ru/video/top#video=/mail/sonypicturesrus/75/76',
         'md5': 'dea205f03120046894db4ebb6159879a',
         'info_dict': {
-            'id': '46301138',
+            'id': '46301138_76',
             'ext': 'mp4',
             'title': 'Новый Человек-Паук. Высокое напряжение. Восстание Электро',
             'timestamp': 1393232740,
@@ -30,7 +30,7 @@ class MailRuIE(InfoExtractor):
         'url': 'http://my.mail.ru/corp/hitech/video/news_hi-tech_mail_ru/1263.html',
         'md5': '00a91a58c3402204dcced523777b475f',
         'info_dict': {
-            'id': '46843144',
+            'id': '46843144_1263',
             'ext': 'mp4',
             'title': 'Samsung Galaxy S5 Hammer Smash Fail Battery Explosion',
             'timestamp': 1397217632,
@@ -54,33 +54,36 @@ class MailRuIE(InfoExtractor):
         author = video_data['author']
         uploader = author['name']
-        uploader_id = author['id']
+        uploader_id = author.get('id') or author.get('email')
+        view_count = video_data.get('views_count')

-        movie = video_data['movie']
-        content_id = str(movie['contentId'])
-        title = movie['title']
+        meta_data = video_data['meta']
+        content_id = '%s_%s' % (
+            meta_data.get('accId', ''), meta_data['itemId'])
+        title = meta_data['title']
         if title.endswith('.mp4'):
             title = title[:-4]
-        thumbnail = movie['poster']
-        duration = movie['duration']
-        view_count = video_data['views_count']
+        thumbnail = meta_data['poster']
+        duration = meta_data['duration']
+        timestamp = meta_data['timestamp']

         formats = [
             {
                 'url': video['url'],
-                'format_id': video['name'],
+                'format_id': video['key'],
+                'height': int(video['key'].rstrip('p'))
             } for video in video_data['videos']
         ]
+        self._sort_formats(formats)

         return {
             'id': content_id,
             'title': title,
             'thumbnail': thumbnail,
-            'timestamp': video_data['timestamp'],
+            'timestamp': timestamp,
             'uploader': uploader,
             'uploader_id': uploader_id,
             'duration': duration,
             'view_count': view_count,
             'formats': formats,
         }
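
Note: the new Mail.Ru format list derives `height` from quality keys like '720p'; `rstrip('p')` followed by `int` is enough here because the keys are digits plus a trailing 'p'. A sketch under that assumption, with hypothetical sample data:

videos = [
    {'url': 'http://example.com/v_hd.mp4', 'key': '1080p'},
    {'url': 'http://example.com/v_sd.mp4', 'key': '480p'},
]
formats = [{
    'url': video['url'],
    'format_id': video['key'],
    'height': int(video['key'].rstrip('p')),  # '1080p' -> 1080
} for video in videos]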

View File

@@ -10,7 +10,7 @@ from ..utils import (
 class MLBIE(InfoExtractor):
-    _VALID_URL = r'https?://m\.mlb\.com/(?:(?:.*?/)?video/(?:topic/[\da-z_-]+/)?v|shared/video/embed/embed\.html\?.*?\bcontent_id=)(?P<id>n?\d+)'
+    _VALID_URL = r'https?://m(?:lb)?\.mlb\.com/(?:(?:.*?/)?video/(?:topic/[\da-z_-]+/)?v|(?:shared/video/embed/embed\.html|[^/]+/video/play\.jsp)\?.*?\bcontent_id=)(?P<id>n?\d+)'
     _TESTS = [
         {
             'url': 'http://m.mlb.com/sea/video/topic/51231442/v34698933/nymsea-ackley-robs-a-home-run-with-an-amazing-catch/?c_id=sea',
@@ -72,6 +72,14 @@ class MLBIE(InfoExtractor):
             'url': 'http://m.mlb.com/shared/video/embed/embed.html?content_id=35692085&topic_id=6479266&width=400&height=224&property=mlb',
             'only_matching': True,
         },
+        {
+            'url': 'http://mlb.mlb.com/shared/video/embed/embed.html?content_id=36599553',
+            'only_matching': True,
+        },
+        {
+            'url': 'http://mlb.mlb.com/es/video/play.jsp?content_id=36599553',
+            'only_matching': True,
+        },
     ]

     def _real_extract(self, url):

View File

@@ -33,7 +33,7 @@ class MTVServicesInfoExtractor(InfoExtractor):
         m = re.match(r'^rtmpe?://.*?/(?P<finalid>gsp\..+?/.*)$', rtmp_video_url)
         if not m:
             return rtmp_video_url
-        base = 'http://mtvnmobile.vo.llnwd.net/kip0/_pxn=1+_pxI0=Ripod-h264+_pxL0=undefined+_pxM0=+_pxK=18639+_pxE=mp4/44620/mtvnorigin/'
+        base = 'http://viacommtvstrmfs.fplive.net/'
         return base + m.group('finalid')

     def _get_feed_url(self, uri):
@@ -186,7 +186,8 @@ class MTVServicesEmbeddedIE(MTVServicesInfoExtractor):
     def _get_feed_url(self, uri):
         video_id = self._id_from_uri(uri)
         site_id = uri.replace(video_id, '')
-        config_url = 'http://media.mtvnservices.com/pmt/e1/players/{0}/config.xml'.format(site_id)
+        config_url = ('http://media.mtvnservices.com/pmt/e1/players/{0}/'
+                      'context4/context5/config.xml'.format(site_id))
         config_doc = self._download_xml(config_url, video_id)
         feed_node = config_doc.find('.//feed')
         feed_url = feed_node.text.strip().split('?')[0]

View File

@@ -13,9 +13,10 @@ class MySpassIE(InfoExtractor):
     _VALID_URL = r'http://www\.myspass\.de/.*'
     _TEST = {
         'url': 'http://www.myspass.de/myspass/shows/tvshows/absolute-mehrheit/Absolute-Mehrheit-vom-17022013-Die-Highlights-Teil-2--/11741/',
-        'file': '11741.mp4',
         'md5': '0b49f4844a068f8b33f4b7c88405862b',
         'info_dict': {
+            'id': '11741',
+            'ext': 'mp4',
             "description": "Wer kann in die Fu\u00dfstapfen von Wolfgang Kubicki treten und die Mehrheit der Zuschauer hinter sich versammeln? Wird vielleicht sogar die Absolute Mehrheit geknackt und der Jackpot von 200.000 Euro mit nach Hause genommen?",
             "title": "Absolute Mehrheit vom 17.02.2013 - Die Highlights, Teil 2",
         },

View File

@@ -7,11 +7,12 @@ import re
 import json

 from .common import InfoExtractor
-from ..utils import (
+from ..compat import (
     compat_ord,
     compat_urllib_parse,
     compat_urllib_request,
+)
+from ..utils import (
     ExtractorError,
 )

View File

@@ -7,6 +7,7 @@ from .common import InfoExtractor
 from ..utils import (
     compat_urllib_parse,
     ExtractorError,
+    clean_html,
 )
@@ -31,6 +32,11 @@ class NaverIE(InfoExtractor):
         m_id = re.search(r'var rmcPlayer = new nhn.rmcnmv.RMCVideoPlayer\("(.+?)", "(.+?)"',
                          webpage)
         if m_id is None:
+            m_error = re.search(
+                r'(?s)<div class="nation_error">\s*(?:<!--.*?-->)?\s*<p class="[^"]+">(?P<msg>.+?)</p>\s*</div>',
+                webpage)
+            if m_error:
+                raise ExtractorError(clean_html(m_error.group('msg')), expected=True)
             raise ExtractorError('couldn\'t extract vid and key')
         vid = m_id.group(1)
         key = m_id.group(2)
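
Note: `expected=True` is the youtube-dl convention for errors caused by the site rather than by a bug, so the message is printed without the bug-report traceback; the same pattern appears in the played.to and niconico hunks below. A minimal sketch of the pattern, with a hypothetical error-box regex:

import re

from youtube_dl.utils import ExtractorError

def check_for_site_error(webpage):
    # Probe for a human-readable error box before parsing further
    m_error = re.search(r'<div class="error">(?P<msg>[^<]+)</div>', webpage)
    if m_error:
        # expected=True suppresses the "please report this issue" notice
        raise ExtractorError(m_error.group('msg'), expected=True)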

View File

@@ -26,8 +26,7 @@ class NBCIE(InfoExtractor):
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)
         theplatform_url = self._search_regex('class="video-player video-player-full" data-mpx-url="(.*?)"', webpage, 'theplatform url')
         if theplatform_url.startswith('//'):
@@ -57,7 +56,7 @@ class NBCNewsIE(InfoExtractor):
             'md5': 'b2421750c9f260783721d898f4c42063',
             'info_dict': {
                 'id': 'I1wpAI_zmhsQ',
-                'ext': 'flv',
+                'ext': 'mp4',
                 'title': 'How Twitter Reacted To The Snowden Interview',
                 'description': 'md5:65a0bd5d76fe114f3c2727aa3a81fe64',
             },
@@ -97,6 +96,8 @@ class NBCNewsIE(InfoExtractor):
         ]

         for base_url in base_urls:
+            if not base_url:
+                continue
             playlist_url = base_url + '?form=MPXNBCNewsAPI'
             all_videos = self._download_json(playlist_url, title)['videos']

View File

@@ -67,7 +67,7 @@ class NDRIE(InfoExtractor):
         thumbnail = None

-        video_url = re.search(r'''3: \{src:'(?P<video>.+?)\.hi\.mp4', type:"video/mp4"},''', page)
+        video_url = re.search(r'''3: \{src:'(?P<video>.+?)\.(lo|hi|hq)\.mp4', type:"video/mp4"},''', page)
         if video_url:
             thumbnails = re.findall(r'''\d+: \{src: "([^"]+)"(?: \|\| '[^']+')?, quality: '([^']+)'}''', page)
             if thumbnails:

View File

@@ -7,7 +7,6 @@ from .common import InfoExtractor
 from ..utils import (
     compat_urlparse,
     compat_urllib_parse,
-    determine_ext,
     unified_strdate,
 )

View File

@@ -2,6 +2,7 @@
 from __future__ import unicode_literals

 import re
+import json

 from .common import InfoExtractor
 from ..utils import (
@@ -11,6 +12,7 @@ from ..utils import (
     unified_strdate,
     parse_duration,
     int_or_none,
+    ExtractorError,
 )
@@ -107,6 +109,9 @@ class NiconicoIE(InfoExtractor):
             flv_info_request, video_id,
             note='Downloading flv info', errnote='Unable to download flv info')

+        if 'deleted=' in flv_info_webpage:
+            raise ExtractorError('The video has been deleted.',
+                                 expected=True)
         video_real_url = compat_urlparse.parse_qs(flv_info_webpage)['url'][0]

         # Start extracting information
@@ -146,3 +151,37 @@ class NiconicoIE(InfoExtractor):
             'duration': duration,
             'webpage_url': webpage_url,
         }
+
+
+class NiconicoPlaylistIE(InfoExtractor):
+    _VALID_URL = r'https?://www\.nicovideo\.jp/mylist/(?P<id>\d+)'
+
+    _TEST = {
+        'url': 'http://www.nicovideo.jp/mylist/27411728',
+        'info_dict': {
+            'id': '27411728',
+            'title': 'AKB48のオールナイトニッポン',
+        },
+        'playlist_mincount': 225,
+    }
+
+    def _real_extract(self, url):
+        list_id = self._match_id(url)
+        webpage = self._download_webpage(url, list_id)
+
+        entries_json = self._search_regex(r'Mylist\.preload\(\d+, (\[.*\])\);',
+                                          webpage, 'entries')
+        entries = json.loads(entries_json)
+        entries = [{
+            '_type': 'url',
+            'ie_key': NiconicoIE.ie_key(),
+            'url': ('http://www.nicovideo.jp/watch/%s' %
+                    entry['item_data']['video_id']),
+        } for entry in entries]
+
+        return {
+            '_type': 'playlist',
+            'title': self._search_regex(r'\s+name: "(.*?)"', webpage, 'title'),
+            'id': list_id,
+            'entries': entries,
+        }
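
Note: the `'_type': 'url'` dicts above are the raw form of what `InfoExtractor.url_result` builds, and delegating via `ie_key()` pins each entry to NiconicoIE instead of re-running URL matching across all extractors. A sketch of the equivalent helper-based version, assuming `url_result` keeps its usual (url, ie) signature:

entries = [
    self.url_result(
        'http://www.nicovideo.jp/watch/%s' % entry['item_data']['video_id'],
        ie='Niconico')
    for entry in entries]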

View File

@@ -7,6 +7,7 @@ from ..utils import (
     unified_strdate,
     parse_duration,
     qualities,
+    strip_jsonp,
     url_basename,
 )
@@ -63,7 +64,7 @@ class NPOIE(InfoExtractor):
             'http://e.omroep.nl/metadata/aflevering/%s' % video_id,
             video_id,
             # We have to remove the javascript callback
-            transform_source=lambda j: re.sub(r'parseMetadata\((.*?)\);\n//.*$', r'\1', j)
+            transform_source=strip_jsonp,
         )

         token_page = self._download_webpage(
             'http://ida.omroep.nl/npoplayer/i.js',
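
Note: `strip_jsonp` generalizes the ad-hoc lambda it replaces; it peels a `callback(...)` wrapper (and a trailing comment) off a JSONP response before JSON parsing. A simplified sketch of the transform, not the exact utils implementation:

import json
import re

def strip_jsonp(code):
    # parseMetadata({"streams": []});\n// epilogue  ->  {"streams": []}
    return re.sub(
        r'(?s)^[a-zA-Z0-9_.]+\s*\(\s*(.*)\)\s*;?\s*(?://.*)?$', r'\1', code)

data = json.loads(strip_jsonp('parseMetadata({"streams": []});'))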

View File

@@ -0,0 +1,31 @@
from __future__ import unicode_literals
from .common import InfoExtractor
from .zdf import extract_from_xml_url
class PhoenixIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?phoenix\.de/content/(?P<id>[0-9]+)'
_TEST = {
'url': 'http://www.phoenix.de/content/884301',
'md5': 'ed249f045256150c92e72dbb70eadec6',
'info_dict': {
'id': '884301',
'ext': 'mp4',
'title': 'Michael Krons mit Hans-Werner Sinn',
'description': 'Im Dialog - Sa. 25.10.14, 00.00 - 00.35 Uhr',
'upload_date': '20141025',
'uploader': 'Im Dialog',
}
}
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
internal_id = self._search_regex(
r'<div class="phx_vod" id="phx_vod_([0-9]+)"',
webpage, 'internal video ID')
api_url = 'http://www.phoenix.de/php/zdfplayer-v1.3/data/beitragsDetails.php?ak=web&id=%s' % internal_id
return extract_from_xml_url(self, video_id, api_url)

View File

@@ -6,6 +6,7 @@ import os.path
 from .common import InfoExtractor
 from ..utils import (
+    ExtractorError,
     compat_urllib_parse,
     compat_urllib_request,
 )
@@ -29,6 +30,12 @@ class PlayedIE(InfoExtractor):
         video_id = self._match_id(url)

         orig_webpage = self._download_webpage(url, video_id)

+        m_error = re.search(
+            r'(?s)Reason for deletion:.*?<b class="err"[^>]*>(?P<msg>[^<]+)</b>', orig_webpage)
+        if m_error:
+            raise ExtractorError(m_error.group('msg'), expected=True)
+
         fields = re.findall(
             r'type="hidden" name="([^"]+)"\s+value="([^"]+)">', orig_webpage)
         data = dict(fields)

View File

@@ -16,13 +16,14 @@ from ..aes import (
 class PornHubIE(InfoExtractor):
-    _VALID_URL = r'^(?:https?://)?(?:www\.)?(?P<url>pornhub\.com/view_video\.php\?viewkey=(?P<videoid>[0-9a-f]+))'
+    _VALID_URL = r'^https?://(?:www\.)?pornhub\.com/view_video\.php\?viewkey=(?P<id>[0-9a-f]+)'
     _TEST = {
         'url': 'http://www.pornhub.com/view_video.php?viewkey=648719015',
-        'file': '648719015.mp4',
         'md5': '882f488fa1f0026f023f33576004a2ed',
         'info_dict': {
-            "uploader": "BABES-COM",
+            'id': '648719015',
+            'ext': 'mp4',
+            "uploader": "Babes",
             "title": "Seductive Indian beauty strips down and fingers her pink pussy",
             "age_limit": 18
         }
@@ -35,9 +36,7 @@ class PornHubIE(InfoExtractor):
         return count

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('videoid')
-        url = 'http://www.' + mobj.group('url')
+        video_id = self._match_id(url)

         req = compat_urllib_request.Request(url)
         req.add_header('Cookie', 'age_verified=1')
@@ -45,7 +44,7 @@ class PornHubIE(InfoExtractor):
         video_title = self._html_search_regex(r'<h1 [^>]+>([^<]+)', webpage, 'title')
         video_uploader = self._html_search_regex(
-            r'(?s)From:&nbsp;.+?<(?:a href="/users/|<span class="username)[^>]+>(.+?)<',
+            r'(?s)From:&nbsp;.+?<(?:a href="/users/|a href="/channels/|<span class="username)[^>]+>(.+?)<',
             webpage, 'uploader', fatal=False)
         thumbnail = self._html_search_regex(r'"image_url":"([^"]+)', webpage, 'thumbnail', fatal=False)
         if thumbnail:

View File

@@ -14,7 +14,6 @@ from ..utils import (
 class PromptFileIE(InfoExtractor):
     _VALID_URL = r'https?://(?:www\.)?promptfile\.com/l/(?P<id>[0-9A-Z\-]+)'
-    _FILE_NOT_FOUND_REGEX = r'<div.+id="not_found_msg".+>.+</div>[^-]'
     _TEST = {
         'url': 'http://www.promptfile.com/l/D21B4746E9-F01462F0FF',
         'md5': 'd1451b6302da7215485837aaea882c4c',
@@ -27,11 +26,10 @@ class PromptFileIE(InfoExtractor):
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)

-        if re.search(self._FILE_NOT_FOUND_REGEX, webpage) is not None:
+        if re.search(r'<div.+id="not_found_msg".+>(?!We are).+</div>[^-]', webpage) is not None:
             raise ExtractorError('Video %s does not exist' % video_id,
                                  expected=True)

View File

@@ -0,0 +1,51 @@
from __future__ import unicode_literals
import re
from .common import InfoExtractor
from ..utils import (
compat_urlparse,
determine_ext,
int_or_none,
)
class QuickVidIE(InfoExtractor):
_VALID_URL = r'https?://(www\.)?quickvid\.org/watch\.php\?v=(?P<id>[a-zA-Z_0-9-]+)'
_TEST = {
'url': 'http://quickvid.org/watch.php?v=sUQT3RCG8dx',
'md5': 'c0c72dd473f260c06c808a05d19acdc5',
'info_dict': {
'id': 'sUQT3RCG8dx',
'ext': 'mp4',
'title': 'Nick Offerman\'s Summer Reading Recap',
'thumbnail': 're:^https?://.*\.(?:png|jpg|gif)$',
'view_count': int,
},
}
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
title = self._html_search_regex(r'<h2>(.*?)</h2>', webpage, 'title')
view_count = int_or_none(self._html_search_regex(
r'(?s)<div id="views">(.*?)</div>',
webpage, 'view count', fatal=False))
video_code = self._search_regex(
r'(?s)<video id="video"[^>]*>(.*?)</video>', webpage, 'video code')
formats = [
{
'url': compat_urlparse.urljoin(url, src),
'format_id': determine_ext(src, None),
} for src in re.findall('<source\s+src="([^"]+)"', video_code)
]
self._sort_formats(formats)
return {
'id': video_id,
'title': title,
'formats': formats,
'thumbnail': self._og_search_thumbnail(webpage),
'view_count': view_count,
}
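
Note on `determine_ext(src, None)` being used as the `format_id` above: the helper just inspects the last path component of the URL and falls back to the given default. A minimal sketch, assuming only the extension-guessing behaviour of the real util:

def determine_ext(url, default_ext='unknown_video'):
    # 'http://host/v.mp4?x=1' -> 'mp4'; no usable suffix -> default
    guess = url.partition('?')[0].rpartition('.')[2]
    return guess if guess.isalnum() else default_ext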

View File

@@ -1,43 +1,43 @@
 from __future__ import unicode_literals

-import re
-
 from .common import InfoExtractor
-from ..utils import (
-    clean_html,
-    compat_parse_qs,
-)
+from ..compat import compat_urllib_parse_unquote


 class Ro220IE(InfoExtractor):
     IE_NAME = '220.ro'
-    _VALID_URL = r'(?x)(?:https?://)?(?:www\.)?220\.ro/(?P<category>[^/]+)/(?P<shorttitle>[^/]+)/(?P<video_id>[^/]+)'
+    _VALID_URL = r'(?x)(?:https?://)?(?:www\.)?220\.ro/(?P<category>[^/]+)/(?P<shorttitle>[^/]+)/(?P<id>[^/]+)'
     _TEST = {
-        "url": "http://www.220.ro/sport/Luati-Le-Banii-Sez-4-Ep-1/LYV6doKo7f/",
-        'file': 'LYV6doKo7f.mp4',
+        'url': 'http://www.220.ro/sport/Luati-Le-Banii-Sez-4-Ep-1/LYV6doKo7f/',
         'md5': '03af18b73a07b4088753930db7a34add',
         'info_dict': {
-            "title": "Luati-le Banii sez 4 ep 1",
-            "description": "re:^Iata-ne reveniti dupa o binemeritata vacanta\. +Va astept si pe Facebook cu pareri si comentarii.$",
+            'id': 'LYV6doKo7f',
+            'ext': 'mp4',
+            'title': 'Luati-le Banii sez 4 ep 1',
+            'description': 're:^Iata-ne reveniti dupa o binemeritata vacanta\. +Va astept si pe Facebook cu pareri si comentarii.$',
         }
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('video_id')
+        video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)

-        flashVars_str = self._search_regex(
-            r'<param name="flashVars" value="([^"]+)"',
-            webpage, 'flashVars')
-        flashVars = compat_parse_qs(flashVars_str)
+        url = compat_urllib_parse_unquote(self._search_regex(
+            r'(?s)clip\s*:\s*{.*?url\s*:\s*\'([^\']+)\'', webpage, 'url'))
+        title = self._og_search_title(webpage)
+        description = self._og_search_description(webpage)
+        thumbnail = self._og_search_thumbnail(webpage)
+
+        formats = [{
+            'format_id': 'sd',
+            'url': url,
+            'ext': 'mp4',
+        }]

         return {
-            '_type': 'video',
             'id': video_id,
-            'ext': 'mp4',
-            'url': flashVars['videoURL'][0],
-            'title': flashVars['title'][0],
-            'description': clean_html(flashVars['desc'][0]),
-            'thumbnail': flashVars['preview'][0],
+            'formats': formats,
+            'title': title,
+            'description': description,
+            'thumbnail': thumbnail,
         }

View File

@@ -28,8 +28,9 @@ class RtlXlIE(InfoExtractor):
         mobj = re.match(self._VALID_URL, url)
         uuid = mobj.group('uuid')

+        # Use m3u8 streams (see https://github.com/rg3/youtube-dl/issues/4118)
         info = self._download_json(
-            'http://www.rtl.nl/system/s4m/vfd/version=2/uuid=%s/fmt=flash/' % uuid,
+            'http://www.rtl.nl/system/s4m/vfd/version=2/uuid=%s/d=pc/fmt=adaptive/' % uuid,
             uuid)

         material = info['material'][0]
@@ -39,11 +40,11 @@ class RtlXlIE(InfoExtractor):
         subtitle = material['title'] or info['episodes'][0]['name']

         videopath = material['videopath']
-        f4m_url = 'http://manifest.us.rtl.nl' + videopath
+        m3u8_url = 'http://manifest.us.rtl.nl' + videopath

-        formats = self._extract_f4m_formats(f4m_url, uuid)
+        formats = self._extract_m3u8_formats(m3u8_url, uuid, ext='mp4')

-        video_urlpart = videopath.split('/flash/')[1][:-4]
+        video_urlpart = videopath.split('/adaptive/')[1][:-4]
         PG_URL_TEMPLATE = 'http://pg.us.rtl.nl/rtlxl/network/%s/progressive/%s.mp4'

         formats.extend([
@@ -54,9 +55,12 @@ class RtlXlIE(InfoExtractor):
             {
                 'url': PG_URL_TEMPLATE % ('a3m', video_urlpart),
                 'format_id': 'pg-hd',
+                'quality': 0,
             }
         ])

+        self._sort_formats(formats)
+
         return {
             'id': uuid,
             'title': '%s - %s' % (progname, subtitle),
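
For reference, the m3u8 master playlist that `_extract_m3u8_formats` consumes is just a text manifest in which each `#EXT-X-STREAM-INF` line describes one variant stream, followed by its URL. A toy parser showing where the formats come from (variant URLs here are hypothetical; the real helper also handles codecs, resolutions, relative URLs, and more):

manifest = '''#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1200000,RESOLUTION=1280x720
http://manifest.us.rtl.nl/video_720.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=400000,RESOLUTION=640x360
http://manifest.us.rtl.nl/video_360.m3u8'''

formats = []
lines = manifest.splitlines()
for i, line in enumerate(lines):
    if line.startswith('#EXT-X-STREAM-INF'):
        formats.append({
            'url': lines[i + 1],  # the variant URL follows its tag line
            'ext': 'mp4',
            'tbr': int(line.split('BANDWIDTH=')[1].split(',')[0]) // 1000,
        })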

View File

@@ -81,7 +81,7 @@ class RTLnowIE(InfoExtractor):
             'id': '99205',
             'ext': 'flv',
             'title': 'Medicopter 117 - Angst!',
-            'description': 'md5:895b1df01639b5f61a04fc305a5cb94d',
+            'description': 're:^Im Therapiezentrum \'Sonnalm\' kommen durch eine Unachtsamkeit die für die B.handlung mit Phobikern gehaltenen Voglespinnen frei\. Eine Ausreißerin',
             'thumbnail': 'http://autoimg.static-fra.de/superrtlnow/287529/1500x1500/image2.jpg',
             'upload_date': '20080928',
             'duration': 2691,

View File

@@ -1,8 +1,6 @@
 # -*- coding: utf-8 -*-
 from __future__ import unicode_literals

-import re
-
 from .common import InfoExtractor
@@ -21,19 +19,20 @@ class RUHDIE(InfoExtractor):
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
-
+        video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)

         video_url = self._html_search_regex(
             r'<param name="src" value="([^"]+)"', webpage, 'video url')
         title = self._html_search_regex(
-            r'<title>([^<]+)&nbsp;&nbsp; RUHD.ru - Видео Высокого качества №1 в России!</title>', webpage, 'title')
+            r'<title>([^<]+)&nbsp;&nbsp; RUHD.ru - Видео Высокого качества №1 в России!</title>',
+            webpage, 'title')
         description = self._html_search_regex(
-            r'(?s)<div id="longdesc">(.+?)<span id="showlink">', webpage, 'description', fatal=False)
+            r'(?s)<div id="longdesc">(.+?)<span id="showlink">',
+            webpage, 'description', fatal=False)
         thumbnail = self._html_search_regex(
-            r'<param name="previewImage" value="([^"]+)"', webpage, 'thumbnail', fatal=False)
+            r'<param name="previewImage" value="([^"]+)"',
+            webpage, 'thumbnail', fatal=False)
         if thumbnail:
             thumbnail = 'http://www.ruhd.ru' + thumbnail

View File

@@ -0,0 +1,61 @@
from __future__ import unicode_literals
import re
from .common import InfoExtractor
class SexuIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?sexu\.com/(?P<id>\d+)'
_TEST = {
'url': 'http://sexu.com/961791/',
'md5': 'ff615aca9691053c94f8f10d96cd7884',
'info_dict': {
'id': '961791',
'ext': 'mp4',
'title': 'md5:4d05a19a5fc049a63dbbaf05fb71d91b',
'description': 'md5:c5ed8625eb386855d5a7967bd7b77a54',
'categories': list, # NSFW
'thumbnail': 're:https?://.*\.jpg$',
'age_limit': 18,
}
}
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
quality_arr = self._search_regex(
r'sources:\s*\[([^\]]+)\]', webpage, 'format string')
formats = [{
'url': fmt[0].replace('\\', ''),
'format_id': fmt[1],
'height': int(fmt[1][:3]),
} for fmt in re.findall(r'"file":"([^"]+)","label":"([^"]+)"', quality_arr)]
self._sort_formats(formats)
title = self._html_search_regex(
r'<title>([^<]+)\s*-\s*Sexu\.Com</title>', webpage, 'title')
description = self._html_search_meta(
'description', webpage, 'description')
thumbnail = self._html_search_regex(
r'image:\s*"([^"]+)"',
webpage, 'thumbnail', fatal=False)
categories_str = self._html_search_meta(
'keywords', webpage, 'categories')
categories = (
None if categories_str is None
else categories_str.split(','))
return {
'id': video_id,
'title': title,
'description': description,
'thumbnail': thumbnail,
'categories': categories,
'formats': formats,
'age_limit': 18,
}

Some files were not shown because too many files have changed in this diff.