Compare commits


252 Commits

Author SHA1 Message Date
Sergey M․
bd1bcd3ea0 release 2016.08.19 2016-08-19 00:15:12 +07:00
Sergey M․
93a63b36f1 [ChangeLog] Actualize 2016-08-19 00:13:24 +07:00
Sergey M․
8b2dc4c328 [options] Remove output template description from --help
Same reasons as for --format
2016-08-18 23:59:13 +07:00
Sergey M․
850837b67a [porncom] Add extractor (Closes #2251, closes #10251) 2016-08-18 23:52:41 +07:00
Sergey M․
13585d7682 [utils] Recognize lowercase units in parse_filesize 2016-08-18 23:32:00 +07:00
Sergey M․
fd3ec986a4 [generic] Fix dbtv test (Closes #10364) 2016-08-18 21:35:41 +07:00
Sergey M․
b0d578ff7b [dbtv] Relax embed regex 2016-08-18 21:30:55 +07:00
Déstin Reed
b0c8f2e9c8 [DBTV:generic] Add support for embeds 2016-08-18 21:29:27 +07:00
Sergey M․
51815886a9 [vk:wallpost] Fix audio extraction 2016-08-18 06:14:05 +07:00
Sergey M․
08a42f9c74 [vk] Fix authentication on python3 2016-08-18 05:22:23 +07:00
Sergey M․
e15ad9ef09 [keezmovies] PEP 8 2016-08-18 04:39:31 +07:00
Sergey M․
4e9fee1015 [hgtvcom:show] Add extractor (Closes #10365) 2016-08-18 04:37:14 +07:00
Remita Amine
7273e5849b [discoverygo] extend _VALID_URL to support other networks 2016-08-17 11:03:09 +01:00
Sergey M․
b505e98784 [extremetube] Revert display_id 2016-08-17 07:02:13 +07:00
Sergey M․
92cd9fd565 [keezmovies] Make display_id optional 2016-08-17 07:01:32 +07:00
Sergey M․
b3d7dce429 release 2016.08.17 2016-08-17 06:21:21 +07:00
Sergey M․
a44694ab4e [ChangeLog] Actualize 2016-08-17 06:19:22 +07:00
Sergey M․
ab19b46b88 [extremetube] Modernize 2016-08-17 06:02:12 +07:00
Sergey M․
8804f10e6b [tube8] Modernize 2016-08-17 05:46:45 +07:00
Sergey M․
6be17c0870 [mofosex] Extract all formats and modernize (Closes #10335) 2016-08-17 05:45:49 +07:00
Sergey M․
8652770bd2 [keezmovies] Improve and modernize 2016-08-17 05:44:46 +07:00
Sergey M․
2a1321a272 [vbox7:generic] Add support for vbox7 embeds 2016-08-17 01:02:59 +07:00
Sergey M․
9c0fa60bf3 [vbox7] Add support for embed URLs 2016-08-17 00:42:02 +07:00
Sergey M․
502d87c546 [mtg] Improve view count extraction 2016-08-17 00:32:28 +07:00
Sergey M․
b35b0d73d8 [viafree] Add extractor (Closes #10358) 2016-08-17 00:21:30 +07:00
Sergey M․
6e7e4a6edf [mtg] Add support for viafree URLs (#10358) 2016-08-17 00:19:43 +07:00
Remita Amine
53fef319f1 [fxnetworks] extend _VALID_URL to support simpsonsworld.com 2016-08-16 16:22:34 +01:00
Remita Amine
2cabee2a7d [amcnetworks] fix typo 2016-08-16 16:22:34 +01:00
Remita Amine
11f502fac1 [theplatform] extract subtitles with multiple formats from the metadata 2016-08-16 16:22:34 +01:00
Sergey M․
98affc1a48 [xvideos] Fix test 2016-08-16 21:20:15 +07:00
Sergey M․
70a2829fee [xvideos] Fix HLS extraction (Closes #10356) 2016-08-16 21:17:52 +07:00
Remita Amine
837e56c8ee [amcnetworks] extract episode metadata 2016-08-16 14:49:32 +01:00
Remita Amine
b5ddee8c77 [amcnetworks] Add new extractor 2016-08-16 13:44:01 +01:00
Sergey M․
fb64adcbd3 [adobepass] PEP 8 2016-08-16 04:45:21 +07:00
Sergey M․
4f640f2890 [bbc:playlist] Fix tests 2016-08-16 04:43:10 +07:00
Sergey M․
254e64a20a [bbc:playlist] Add support for pagination (Closes #10349) 2016-08-16 04:36:23 +07:00
Remita Amine
818ac213eb [adobepass] add IE suffix to the extractor and remove duplicate constant 2016-08-15 21:36:34 +01:00
Remita Amine
cbef4d5c9f [fxnetworks] add test and check geo restriction 2016-08-15 17:10:45 +01:00
Remita Amine
bf90c46790 [fxnetworks] Add new extractor(closes #9462) 2016-08-15 16:34:32 +01:00
Yen Chi Hsuan
69eb4d699f [cbsnews] Remove invalid tests. CBS Live videos get deleted soon. 2016-08-15 20:29:22 +08:00
Yen Chi Hsuan
6d8ec8c3b7 [ChangeLog] Update for CBSLocal and related changes 2016-08-15 13:39:43 +08:00
Yen Chi Hsuan
760845ce99 [cbslocal] Adapt to SendtoNewsIE 2016-08-15 13:37:37 +08:00
Yen Chi Hsuan
5c2d087221 [sendtonews] Fix extraction 2016-08-15 13:31:08 +08:00
Yen Chi Hsuan
b6c4e36728 [jwplatform] Parse video_id from JWPlayer data
And remove a mysterious comma from 115c65793a
2016-08-15 13:29:01 +08:00
Sergey M․
1a57b8c18c [zippcast] Remove extractor (Closes #10332)
ZippCast is shut down
2016-08-15 08:25:24 +07:00
Remita Amine
24eb13b1c6 [uplynk,viceland] update tests and change uplynk extractors names 2016-08-14 22:45:43 +01:00
Remita Amine
525e0316c0 [adobepass] fix check for pendingLogout errors 2016-08-14 21:25:43 +01:00
Remita Amine
7e60ce9cf7 [adobepass] clear cache in case of pendingLogout errors 2016-08-14 21:24:33 +01:00
Remita Amine
e811bcf8f8 [viceland] raise ExtractorError for errors other than HTTP 400 2016-08-14 20:13:35 +01:00
Remita Amine
6103f59095 [viceland] remove outdated comment 2016-08-14 19:08:35 +01:00
Remita Amine
9fa5789279 [viceland] fix info extraction(closes #8799) 2016-08-14 19:04:23 +01:00
Remita Amine
d2ac04674d [viceland] Add new extractor(#8799) 2016-08-14 18:04:50 +01:00
Remita Amine
1fd6e30988 [adobepass] create separate class for adobe pass authentication 2016-08-14 18:04:50 +01:00
Sergey M․
884cdb6cd9 [life:embed] Improve extraction 2016-08-14 20:49:11 +07:00
Remita Amine
9771b1f901 [theplatform] use _get_netrc_login_info and fix session expiration check(#10345) 2016-08-14 11:55:28 +01:00
Remita Amine
2118fdd1a9 [common] add separate method for getting netrc login info 2016-08-14 11:55:28 +01:00
Sergey M․
320d597c21 [vgtv] Detect geo restricted videos (#10348) 2016-08-14 16:25:14 +07:00
Remita Amine
aaf44a2f47 [uplynk] Add new extractor 2016-08-13 22:53:41 +01:00
Yen Chi Hsuan
fafabc0712 Update ChangeLog for #10342
[skip ci]
2016-08-14 02:33:15 +08:00
Yen Chi Hsuan
409760a932 Merge pull request #10342 from muphil/patch-1
[xiami] bug fix for extractor xiami.py
2016-08-14 02:30:50 +08:00
phi
097eba019d bug fix for extractor xiami.py
Before applying this patch, downloading resources from xiami.com crashed with the following traceback:
Traceback (most recent call last):
  File "/home/phi/.local/bin/youtube-dl", line 11, in <module>
    sys.exit(main())
  File "/home/phi/.local/lib/python3.5/site-packages/youtube_dl/__init__.py", line 433, in main
    _real_main(argv)
  File "/home/phi/.local/lib/python3.5/site-packages/youtube_dl/__init__.py", line 423, in _real_main
    retcode = ydl.download(all_urls)
  File "/home/phi/.local/lib/python3.5/site-packages/youtube_dl/YoutubeDL.py", line 1786, in download
    url, force_generic_extractor=self.params.get('force_generic_extractor', False))
  File "/home/phi/.local/lib/python3.5/site-packages/youtube_dl/YoutubeDL.py", line 691, in extract_info
    ie_result = ie.extract(url)
  File "/home/phi/.local/lib/python3.5/site-packages/youtube_dl/extractor/common.py", line 347, in extract
    return self._real_extract(url)
  File "/home/phi/.local/lib/python3.5/site-packages/youtube_dl/extractor/xiami.py", line 116, in _real_extract
    return self._extract_tracks(self._match_id(url))[0]
  File "/home/phi/.local/lib/python3.5/site-packages/youtube_dl/extractor/xiami.py", line 43, in _extract_tracks
    '%s/%s%s' % (self._API_BASE_URL, item_id, '/type/%s' % typ if typ else ''), item_id)
  File "/home/phi/.local/lib/python3.5/site-packages/youtube_dl/extractor/common.py", line 562, in _download_json
    json_string, video_id, transform_source=transform_source, fatal=fatal)
  File "/home/phi/.local/lib/python3.5/site-packages/youtube_dl/extractor/common.py", line 568, in _parse_json
    return json.loads(json_string)
  File "/usr/lib/python3.5/json/__init__.py", line 312, in loads
    s.__class__.__name__))
TypeError: the JSON object must be str, not 'NoneType'

This patch solves exactly this problem.
2016-08-14 02:18:59 +08:00
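The crash above boils down to json.loads() being handed None after the API request comes back empty. A minimal standalone sketch of that failure mode, with made-up values rather than the extractor's actual code:

    import json

    api_response = None  # what the download helper yields when the request fails

    try:
        json.loads(api_response)
    except TypeError as exc:
        # Python 3.5: "the JSON object must be str, not 'NoneType'"
        print('API returned no body: %s' % exc)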
Sergey M․
73a85620ee release 2016.08.13 2016-08-13 23:17:11 +07:00
Sergey M․
a560f28c98 [ChangeLog] Actualize 2016-08-13 23:01:35 +07:00
Sergey M․
5ec5461e1a [pbs] Clarify comment on http formats 2016-08-13 22:50:18 +07:00
Sergey M․
542130a5d9 [pbs] Fix description extraction and update tests 2016-08-13 21:59:29 +07:00
Sergey M․
82997dad57 [franceculture] Fix extraction (Closes #10324) 2016-08-13 21:00:34 +07:00
Sergey M․
647a7bf5e8 [pornotube] Fix extraction (Closes #10322) 2016-08-13 20:49:16 +07:00
Sergey M․
77afa008dd [4tube] Fix metadata extraction (Closes #10321) 2016-08-13 19:55:09 +07:00
Yen Chi Hsuan
db535435b3 [bigflix] Remove an invalid test
There's no video anymore
2016-08-13 18:02:11 +08:00
Sergey M․
c2a453b461 [imgur] Fix width and height extraction (Closes #10325) 2016-08-13 16:46:07 +07:00
Sergey M․
cd29eaab95 [vbox7] Remove unused imports 2016-08-13 16:45:34 +07:00
Yen Chi Hsuan
52aa7e7476 [test_verbose_output] Fix tests under Python 3 2016-08-13 17:36:14 +08:00
Sergey M․
e97c55ee6a [expotv] Improve extraction and update test 2016-08-13 16:29:05 +07:00
Remita Amine
acfccacad5 [downloader/external:curl] Clarify why CurlFD should not capture stderr 2016-08-13 10:26:02 +01:00
Remita Amine
5f2c2b7936 [test_utils] add test for option with not str value 2016-08-13 09:54:12 +01:00
Sergey M․
cb55908e51 [vbox7] Fix extraction (Closes #10309) 2016-08-13 15:47:20 +07:00
Yen Chi Hsuan
e581224843 [tapely] Remove extractor. It's shut down
Closes #10323
2016-08-13 16:32:07 +08:00
Remita Amine
f50365e91c [pbs] add test for videos with undocumented http formats and remove unused import 2016-08-13 09:10:09 +01:00
Sergey M․
c366f8d30a [24video] Add support for me and xxx TLDs 2016-08-13 14:47:51 +07:00
Sergey M․
6a26c5f9d5 [muenchentv] Fix extraction (Closes #10313) 2016-08-13 14:28:44 +07:00
Sergey M․
bd6fb007de [24video] Fix comment count extraction 2016-08-13 14:22:47 +07:00
Sergey M․
b69b2ff736 [sunporno] Add support for embed URLs 2016-08-13 14:13:49 +07:00
Sergey M․
794e5dcd7e [sunporno] Fix metadata extraction (Closes #10316) 2016-08-13 14:09:35 +07:00
Remita Amine
f0d3669437 [hgtv] Add new extractor(closes #3999) 2016-08-12 18:05:49 +01:00
Remita Amine
98e698f1ff [external/curl] respect more downloader options and display progress 2016-08-12 12:30:02 +01:00
Remita Amine
3cddb8d6a7 [pbs] check all http formats and remove unnecessary request
- some qualities that are not reported in the documentation
are available (4500k, 6500k)
- the videoInfo request hasn't worked for a long time
2016-08-12 08:38:06 +01:00
Sergey M․
990d533ee4 [crunchyroll] Add support for HLS (Closes #10301) 2016-08-12 00:56:16 +07:00
Sergey M․
b0081562d2 release 2016.08.12 2016-08-12 00:22:22 +07:00
Sergey M․
fff37cfd4f [ChangeLog] Actualize 2016-08-12 00:18:28 +07:00
Sergey M․
a3be69b7f0 [viu] Remove from extractors 2016-08-12 00:14:51 +07:00
Sergey M․
0fd1b1624c [goldenmoustache] Remove extractor (Closes #10298)
Now uses dailymotion
2016-08-11 23:52:17 +07:00
Sergey M․
367976d49f [drtuber] Improve title extraction 2016-08-11 23:47:52 +07:00
Sergey M․
0aef0771f8 [drtuber] Make dislike count optional (Closes #10297) 2016-08-11 23:47:27 +07:00
Sergey M․
0c070681c5 [chirbit] Fix extraction (Closes #10296) 2016-08-11 23:37:56 +07:00
Sergey M․
30b25d382d [francetvinfo] Relax _VALID_URL 2016-08-11 21:42:55 +07:00
Yen Chi Hsuan
e5f878c205 [ChangeLog] Add change log for #10269
[skip ci]
2016-08-11 19:13:41 +08:00
Yen Chi Hsuan
e2e84aed7e Merge branch 'lkho-pr/#10268' 2016-08-11 19:09:18 +08:00
Yen Chi Hsuan
b1927f4e8a [YoutubeDL] Disable newline conversion when writing subtitles
By default io.open() converts all '\n' occurrences to '\r\n' when writing
files. If the content already contains '\r\n', it will be converted to
'\r\r\n', breaking some video players.
2016-08-11 19:04:23 +08:00
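A minimal sketch of the io.open() behavior being disabled here (standard library semantics; the file name and data are made up):

    import io

    srt_data = '1\r\n00:00:01,000 --> 00:00:02,000\r\nHello\r\n'

    # newline='' turns off newline translation on write, so the '\r\n'
    # sequences already present in the subtitle data pass through untouched.
    # With the default (newline=None) every '\n' becomes os.linesep, which
    # on Windows turns '\r\n' into '\r\r\n'.
    with io.open('sub.en.srt', 'w', encoding='utf-8', newline='') as f:
        f.write(srt_data)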
Yen Chi Hsuan
3b9323d96e Merge branch 'pr/#10268' of https://github.com/lkho/youtube-dl into lkho-pr/#10268 2016-08-11 19:03:08 +08:00
lkho
7f832413d6 Preserve line endings for downloaded subtitle files 2016-08-10 23:40:50 +08:00
Sergey M․
7f2ed47595 [rtlnl] Relax _VALID_URL (Closes #10282) 2016-08-10 21:07:43 +07:00
Sergey M․
c3fa77bdef [formula1] Relax _VALID_URL (Closes #10283) 2016-08-10 21:00:40 +07:00
Remita Amine
57ce8a6d08 [wat] improve extraction(#10281)
add alternative method to extract http formats
works even if the video is geo-restricted or removed
from public access(most of the cases)
2016-08-10 14:20:28 +01:00
Yen Chi Hsuan
69d8eeeec5 [ctsnews] Fix extraction 2016-08-10 11:38:38 +08:00
Yen Chi Hsuan
81c13222c6 [utils] Recognize more formats in unified_timestamp
Used in CtsNews
2016-08-10 11:37:23 +08:00
Sergey M․
b1ce2ba197 release 2016.08.10 2016-08-10 00:20:44 +07:00
Sergey M․
5c8411e968 [ChangeLog] Actualize 2016-08-10 00:18:28 +07:00
Sergey M․
cc9c8ce5df [devscripts/prepare_manpage] Fix description strings starting with dash (Closes #10273) 2016-08-09 22:24:58 +07:00
Remita Amine
20ef4123b9 [uol] remove unused import 2016-08-09 15:13:15 +01:00
Remita Amine
4e62d26aa2 [uol] Add new extractor(#4263) 2016-08-09 15:09:08 +01:00
Sergey M․
b657816684 Credit @singh-pratyush96 for #10223 2016-08-09 04:04:45 +07:00
Sergey M․
9778b3e7ee Credit @zvonicek for #10242 and #10253 2016-08-09 04:03:52 +07:00
Sergey M․
25dd58ca6a [metadatafromtitle] Remove unused exception class 2016-08-09 04:01:05 +07:00
nyorain
5e42f8a0ad Make --metadata-from-title non fatal
Output a warning if the metadata can't be parsed from the title (and don't write any metadata) instead of raising a critical error.
2016-08-09 03:56:22 +07:00
Sergey M․
1ad6b891b2 Add more checks for --min/max-sleep-interval arguments and use more idiomatic naming 2016-08-09 03:47:56 +07:00
Sergey M․
7aa589a5e1 Fix --min/max-sleep-interval wording 2016-08-09 03:46:52 +07:00
singh-pratyush96
065bc35489 Add --max-sleep-interval (Closes #9930) 2016-08-09 03:32:42 +07:00
Sergey M․
3a380766d1 [rbmaradio] Improve, simplify and extract all formats (Closes #10242) 2016-08-09 02:46:29 +07:00
Petr Zvoníček
affaea0688 [rbmaradio] Fixed extractor 2016-08-09 02:18:33 +07:00
Sergey M․
77426a087b [sonyliv] Improve (Closes #10258) 2016-08-09 02:16:28 +07:00
Sukhbir Singh
8991844ea2 [sonyliv] Add new extractor 2016-08-09 02:09:13 +07:00
Sergey M․
082395d0a0 [extractor/generic] Add proper default to _search_json_ld call 2016-08-08 22:48:33 +07:00
Sergey M․
e8ed7354e6 [flipagram] Add proper default to _search_json_ld call 2016-08-08 22:46:19 +07:00
Sergey M․
1e7f602e2a [condenast] Make _search_json_ld call non fatal 2016-08-08 22:45:49 +07:00
Sergey M․
522f6c066d [bbc] Add proper default to _search_json_ld call 2016-08-08 22:44:36 +07:00
Sergey M․
321b5e082a [extractor/common] Respect default in _search_json_ld 2016-08-08 22:36:18 +07:00
Sergey M․
3711fa1eb2 Revert "[flipagram] Make _search_json_ld non fatal"
This reverts commit d34995a9e3.
2016-08-08 21:49:45 +07:00
Sergey M․
395c74615c Revert "[extractor/generic] Make _search_json_ld non fatal"
This reverts commit 958849275f.
2016-08-08 21:49:27 +07:00
Yen Chi Hsuan
3dc240e8c6 [sohu] Update _TESTS (closes #10260) 2016-08-08 18:48:21 +08:00
Yen Chi Hsuan
a41a6c5094 [chaturbate] Skip the invalid test 2016-08-08 13:06:02 +08:00
Yen Chi Hsuan
d71207121d [biqle] Skip an invalid test 2016-08-08 12:59:55 +08:00
Yen Chi Hsuan
b1c6f21c74 [aparat] Fix extraction 2016-08-08 12:59:07 +08:00
Yen Chi Hsuan
412abb8760 [bilibili] Update _TESTS 2016-08-08 12:57:17 +08:00
Yen Chi Hsuan
f17d5f6d14 [features.aol.com] Fix _TESTS 2016-08-08 12:52:36 +08:00
Remita Amine
6bb801cfaf [cwtv] extract http formats 2016-08-07 22:58:12 +01:00
Sergey M․
de02d1f4e9 [rozhlas] Fix regexes and improve extraction (Closes #10253) 2016-08-08 04:58:02 +07:00
Petr Zvoníček
e1f93a0a76 [rozhlas] Add new extractor 2016-08-08 04:41:45 +07:00
Charlie Le
d21a661bb4 [README.md] Update Options Link
The link references a bad anchor. The updated link now references the correct anchor.
2016-08-08 03:46:42 +07:00
Yen Chi Hsuan
b2bd968f4b [kuwo:singer] Fix extraction 2016-08-07 22:59:34 +08:00
Sergey M․
4a01befb34 release 2016.08.07 2016-08-07 21:12:41 +07:00
Sergey M․
845dfcdc40 [ChangeLog] Actualize 2016-08-07 21:10:48 +07:00
Sergey M․
d92cb46305 [discoverygo] Add extractor (Closes #10245) 2016-08-07 20:57:05 +07:00
Sergey M․
a8795327ca [utils] Add support for TV Parental Guidelines ratings in parse_age_limit 2016-08-07 20:45:18 +07:00
Sergey M․
d34995a9e3 [flipagram] Make _search_json_ld non fatal 2016-08-07 19:06:55 +07:00
Sergey M․
958849275f [extractor/generic] Make _search_json_ld non fatal 2016-08-07 19:04:22 +07:00
Sergey M․
998f094452 [bbc] Remove proxy from test 2016-08-07 18:13:05 +07:00
Sergey M․
aaa42cf0cf [bbc] PEP 8 2016-08-07 18:05:13 +07:00
Sergey M․
9fb64c04cd [bbc] Add support for morph embeds (Closes #10239) 2016-08-07 18:01:50 +07:00
Remita Amine
f9622868e7 [bbc] preserve format_id backward compatibility 2016-08-07 11:14:15 +01:00
Remita Amine
37768f9242 [common] correctly lower the preference of m3u8 master manifest format 2016-08-07 10:59:09 +01:00
Sergey M․
a1aadd09a4 [tnaflixnetworkbase] Improve title extraction 2016-08-07 16:00:09 +07:00
Sergey M․
b47a75017b [tnaflix] Fix metadata extraction (Closes #10249) 2016-08-07 16:00:03 +07:00
Remita Amine
e37b54b140 [fox] fix theplatform release url query 2016-08-06 20:53:39 +01:00
Yen Chi Hsuan
c1decda58c [openload] Fix extraction (closes #9706) 2016-08-07 02:44:15 +08:00
Yen Chi Hsuan
d3f8e038fe [utils] Add decode_png for openload (#9706) 2016-08-07 02:42:58 +08:00
Remita Amine
ad152e2d95 [bbc] fix test 2016-08-06 19:36:12 +01:00
Remita Amine
b0af12154e [bbc] reduce requests and improve format_id 2016-08-06 19:24:59 +01:00
Remita Amine
d16b3c6677 [common] extract partOfTVSeries info in json-ld 2016-08-06 18:58:38 +01:00
Remita Amine
c57244cdb1 [common] lower the preference of m3u8 master manifest format 2016-08-06 18:55:05 +01:00
Remita Amine
a7e5f27412 [bbc] improve extraction
- extract f4m and dash formats
- improve format sorting and listing
- improve extraction of articles with `otherSettings.playlist`
2016-08-06 18:48:09 +01:00
Remita Amine
089a40955c [pokemon] improve _VALID_URL 2016-08-06 12:08:14 +01:00
Remita Amine
d73ebac100 [pokemon] Add new extractor(closes #10093) 2016-08-06 11:18:14 +01:00
Remita Amine
e563c0d73b [condenast] fallback to loader.js if video.js fail 2016-08-05 21:01:16 +01:00
Sergey M․
491c42e690 release 2016.08.06 2016-08-06 01:23:48 +07:00
Sergey M․
7f2339c617 [ChangeLog] Actualize 2016-08-06 01:19:47 +07:00
Sergey M․
8122e79fef [gamekings] Remove remnants 2016-08-06 00:12:37 +07:00
Sergey M․
fe3ad1d456 [adultswim] Remove superfluous md5 from test 2016-08-06 00:02:05 +07:00
Sergey M․
038a5e1a65 [adultswim] Add support for trailers (Closes #10235) 2016-08-06 00:00:05 +07:00
Sergey M․
84bc23b41b [archiveorg] PEP 8 2016-08-05 23:16:19 +07:00
Sergey M․
46933a15d6 [extractor/common] Support root JSON-LD lists (Closes #10203) 2016-08-05 23:14:32 +07:00
Sergey M․
3859ebeee6 [tvplay] Capture and output native error message 2016-08-05 22:50:42 +07:00
Remita Amine
d50aca41f8 [archiveorg] improve format extraction(closes #10219) 2016-08-05 16:42:15 +01:00
Remita Amine
0ca057b965 [jwplatform] add support for playlist extraction and relative urls and improve audio detection 2016-08-05 16:42:15 +01:00
Sergey M․
5ca968d0a6 [tvplay] Extract series metadata 2016-08-05 22:37:38 +07:00
Sergey M․
f0d31c624e [tvplay] Add support for subtitles (Closes #10194) 2016-08-05 22:17:32 +07:00
Remita Amine
08c655906c [5min] fix _VALID_URL(closes #10228) 2016-08-05 10:22:33 +01:00
Remita Amine
5a993e1692 [natgeo] fix tests(closes #10229) 2016-08-05 10:13:26 +01:00
Remita Amine
a7d2953073 [extractors] add tvp:embed import 2016-08-05 10:11:59 +01:00
Remita Amine
fdd0b8f8e0 [tvp] extract video id from the webpage(fixes #7799) 2016-08-05 09:44:15 +01:00
Remita Amine
f65dc41b72 [naver] extract upload date 2016-08-05 08:12:25 +01:00
Yen Chi Hsuan
962250f7ea [cbslocal] Fix timestamp parsing (closes #10213) 2016-08-05 11:44:50 +08:00
Yen Chi Hsuan
7dc2a74e0a [utils] Fix unified_timestamp for formats parsed by parsedate_tz() 2016-08-05 11:41:55 +08:00
Remita Amine
b02b960c6b [naver] improve extraction(closes #8096) 2016-08-04 21:42:22 +01:00
Remita Amine
4f427c4be8 [condenast] improve extraction 2016-08-04 18:30:56 +01:00
Sergey M․
8a00ea567b [natgeo:episodeguide] Do not shadow url from outer scope 2016-08-04 23:21:04 +07:00
Remita Amine
8895be01fc [5min] fix _VALID_URL 2016-08-04 16:55:12 +01:00
Remita Amine
52e7fcfeb7 [engadget] Relax _VALID_URL 2016-08-04 16:34:47 +01:00
Remita Amine
2396062c74 [5min] delegate extraction to AolIE
recently the 5min SenseHandler request returns an
HTTP Error 503: Service Unavailable error
2016-08-04 16:21:27 +01:00
Remita Amine
14704aeff6 [kaltura] remove debugging line 2016-08-04 14:54:34 +01:00
Remita Amine
3c2c3af059 [extractors] change imports for national geographic extractors 2016-08-04 12:20:56 +01:00
Remita Amine
1891ea2d76 [nationalgeographic] Add support for National Geographic Episode Guide 2016-08-04 12:18:10 +01:00
Remita Amine
1094074c04 [kaltura] extract subtitles and reduce requests 2016-08-04 09:39:06 +01:00
Remita Amine
217d5ae013 [vodplatform] Add new extractor 2016-08-04 09:39:06 +01:00
Remita Amine
8b40854529 [common] lower proto_preference of rtsp formats
Most of the time RtspFD fails to download videos but reports
success of the download with this output:
[mpv] 0 bytes
[download] 100% of 0.00B
2016-08-04 09:39:06 +01:00
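A simplified illustration of how a lowered protocol preference plays out in format sorting; the numbers and helper below are assumptions for the sketch, not youtube-dl's actual implementation:

    PROTO_PREFERENCE = {'https': 0, 'http': 0, 'm3u8_native': -1, 'rtsp': -2}

    def pick_best(formats):
        # youtube-dl-style sorting treats the last format as the best one,
        # so with the lowered preference an RTSP-only format wins only when
        # nothing else is available.
        ranked = sorted(formats, key=lambda f: (
            PROTO_PREFERENCE.get(f.get('protocol'), -1), f.get('tbr') or 0))
        return ranked[-1]

    formats = [
        {'format_id': 'rtsp-3000', 'protocol': 'rtsp', 'tbr': 3000},
        {'format_id': 'hls-2500', 'protocol': 'm3u8_native', 'tbr': 2500},
        {'format_id': 'http-2500', 'protocol': 'http', 'tbr': 2500},
    ]
    print(pick_best(formats)['format_id'])  # -> http-2500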
Yen Chi Hsuan
6bb0fbf9fb Revert "[README.md] Use full paths for all configuration files (#8863)"
This reverts commit 899d2bea63.
2016-08-04 09:54:28 +08:00
Sergey M․
8d3b226b83 [gamekings] Remove extractor
Now covered by generic jwplayer
2016-08-03 22:06:10 +07:00
Remita Amine
42b7a5afe0 [limelight] extract http formats 2016-08-03 13:12:51 +01:00
Yen Chi Hsuan
899d2bea63 [README.md] Use full paths for all configuration files (#8863) 2016-08-03 11:15:27 +08:00
Sergey M․
9cb0e65d7e [ntvru] Fix extraction 2016-08-02 22:56:48 +07:00
Sergey M․
b070564efb [extractor/common] Support multiple properties in _og_search_property 2016-08-02 22:55:14 +07:00
Philipp Hagemeister
ce28252c48 [options] Add test that checks that --password=secret is hidden in verbose output 2016-08-02 17:03:46 +02:00
Philipp Hagemeister
3aa9a73554 [options] Hide --password=secret in verbose output 2016-08-02 17:03:26 +02:00
Philipp Hagemeister
6a9b3b61ea [comedycentral] Re-add shortnames
In cc99d4f826, the shortname feature got deleted by accident. Re-add it as a separate IE.
2016-08-02 14:02:31 +02:00
Sergey M․
45408eb075 release 2016.08.01 2016-08-01 22:59:23 +07:00
Sergey M․
eafc66855d [ChangeLog] Add recent changes 2016-08-01 22:56:01 +07:00
Sergey M․
e03d3e6453 [cwtv] Add support for cwtvpr.com (Closes #10196) 2016-08-01 22:51:01 +07:00
Remita Amine
a70e45f80a [limelight] keep videos marked as previewStream
e382b953f0 (commitcomment-18472915)
2016-08-01 16:25:41 +01:00
Sergey M․
697655a7c0 [safari] Relax url regexes (Closes #10202) 2016-08-01 21:48:48 +07:00
Remita Amine
e382b953f0 [limelight] skip preview and drm protected videos 2016-08-01 00:33:30 +01:00
Yen Chi Hsuan
116e7e0d04 [bloomberg] Support BPlayer() players (closes #10187) 2016-07-31 14:47:19 +08:00
Sergey M․
cf03e34ad3 [yandexmusic:track] Fix extraction (Closes #10193) 2016-07-31 07:56:18 +07:00
Sergey M․
2903137292 release 2016.07.30 2016-07-30 14:45:07 +07:00
Sergey M․
9361f2169c [ChangeLog] Make extractor improvements' descriptions more concrete 2016-07-30 14:43:28 +07:00
Yen Chi Hsuan
35aa6c538f Add ChangeLog 2016-07-30 12:33:09 +08:00
Sergey M․
fa9f1d16b8 [dailymotion:playlist] Carry long line 2016-07-29 22:47:34 +07:00
Dave
485fedf6fd [dailymotion:playlist] Optimize download archive processing 2016-07-29 22:45:41 +07:00
Jaime Marquínez Ferrándiz
da0baba5c8 [rtve] Fix extraction for some videos
For example http://www.rtve.es/alacarta/videos/documentos-tv/documentos-tv-descredito/3574098/.
2016-07-29 17:20:27 +02:00
Jaime Marquínez Ferrándiz
bb9f3bfedf Revert "[rtve] Fix extraction (#10076)"
This reverts commit c39b2ed990.

Apparently outside of Spain using 'auth/resources' is required (#10097).
2016-07-29 17:14:04 +02:00
Sergey M․
dbc0b39b91 [tv2] Improve extraction 2016-07-29 22:01:34 +07:00
Sergey M․
481c5c5137 [tv2:article] Fix extraction (Closes #10188) 2016-07-29 21:43:17 +07:00
Sergey M․
0cacae2807 [twitch:clips] Sort formats 2016-07-29 09:01:53 +07:00
Sergey M․
d9d56deadf release 2016.07.28 2016-07-28 02:42:57 +07:00
Sergey M․
74ba450a81 [twitch:clips] Fix extraction (Closes #9767) 2016-07-28 22:30:09 +07:00
Sergey M․
db19df6ca0 [extractor/generic] Add test for #10179 2016-07-28 22:20:08 +07:00
Sergey M․
fbdf8d15d1 [soundcloud] Add _extract_urls (#10179) 2016-07-28 22:16:05 +07:00
Sergey M․
94aae01548 [extractor/generic] Extract all soundcloud embeds (Closes #10179) 2016-07-28 22:15:15 +07:00
Sergey M․
39eef54cf0 [ard:mediathek] Skip unavailable test 2016-07-28 21:38:23 +07:00
Sergey M․
05c8268c81 [shared] Modernize and make more robust 2016-07-27 23:39:02 +07:00
Sergey M․
289a16b4f3 [shared] Respect redirect URL (Closes #10170) 2016-07-27 23:28:01 +07:00
Sergey M․
7935926baa [devscripts/show-downloads-statistics] Add support for paging 2016-07-27 00:14:40 +07:00
Sergey M․
dcbb07c35a release 2016.07.26.2 2016-07-26 23:56:53 +07:00
Sergey M․
40090e8d51 [extractor/common] Improve is_suitable
In order to fix breakage introduced by a3aa814b77
2016-07-26 23:54:06 +07:00
Sergey M․
3e050d51d4 [orf:oe1] Relax _VALID_URL 2016-07-26 23:14:04 +07:00
Sergey M․
ced70c8640 [cbc] PEP 8 2016-07-26 23:08:08 +07:00
Sergey M․
9a700deea4 [instagram] Remove duplicate field in test 2016-07-26 23:07:16 +07:00
Sergey M․
dc35ba0eba [mgtv] Fix typo 2016-07-26 23:06:21 +07:00
Sergey M․
88bd486b9a [cbc] Improve extraction for videos embedded with clipId 2016-07-26 22:58:50 +07:00
Sergey M․
7f8b92e3cf [bigflix] Update tests 2016-07-26 21:44:53 +07:00
Yen Chi Hsuan
35f6e0ff36 [mtv.de] Skip 2 geo-restricted tests 2016-07-26 13:19:47 +08:00
Yen Chi Hsuan
326fa4e6e5 [generic] Skip an invalid test 2016-07-26 13:16:04 +08:00
Yen Chi Hsuan
c74299a72c [cmt] Detect unavailable videos and update _TESTS 2016-07-26 13:13:14 +08:00
Yen Chi Hsuan
10a1bb3a78 [mtv] Fix for videos with missing bitrates 2016-07-26 13:12:24 +08:00
Yen Chi Hsuan
4d3e543c73 Update extractors.py 2016-07-26 11:17:28 +08:00
Yen Chi Hsuan
05d1e7aaa9 [generic] Fix an MTV test and another test that breaks nosetests 2016-07-26 11:11:36 +08:00
Yen Chi Hsuan
a3aa814b77 Update _TESTS for MTV sites 2016-07-26 11:10:41 +08:00
Yen Chi Hsuan
5c32a77cad [nextmovie] Remove extractor
This domain name now redirects to mtv.com
2016-07-26 11:08:55 +08:00
Yen Chi Hsuan
14a28e705b [test/test_all_urls] Remove *.cc.com tests 2016-07-26 11:08:09 +08:00
Yen Chi Hsuan
cc99d4f826 [comedycentral] Remove IEs for *.cc.com except tosh.cc.com
All other subdomains now redirect to cc.com/* URLs
2016-07-26 11:06:50 +08:00
Yen Chi Hsuan
712c7530ff [mtv] Extract more metadata and more
1. Remove MTVIggyIE. All www.mtviggy.com URLs now redirect to
   www.mtv.com
2. Fix MTVDEIE
3. Return multiple URLs from _transform_rtmp_url. This is for
   tosh.cc.com
2016-07-26 11:03:43 +08:00
Sergey M․
0a147785e8 [camdemy] Extract duration properly 2016-07-25 23:03:58 +07:00
Sergey M․
59eaf69e33 [camdemy] Fix camdemy 2016-07-25 23:03:43 +07:00
Sergey M․
e8be2943a7 [smotri] Modernize, make more robust and fix tests 2016-07-24 18:38:18 +07:00
129 changed files with 4267 additions and 2489 deletions

.github/ISSUE_TEMPLATE.md

@@ -6,8 +6,8 @@
 ---
-### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2016.07.24*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
+### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2016.08.19*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
-- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2016.07.24**
+- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2016.08.19**
 ### Before submitting an *issue* make sure you have:
 - [ ] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
@@ -35,7 +35,7 @@ $ youtube-dl -v <your command line>
 [debug] User config: []
 [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dl version 2016.07.24
+[debug] youtube-dl version 2016.08.19
 [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
 [debug] Proxy map: {}

AUTHORS

@@ -179,3 +179,5 @@ Jakub Adam Wieczorek
 Aleksandar Topuzović
 Nehal Patel
 Rob van Bekkum
+Petr Zvoníček
+Pratyush Singh

CONTRIBUTING.md

@@ -46,7 +46,7 @@ Make sure that someone has not already opened the issue you're trying to open. S
 ### Why are existing options not enough?
-Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/rg3/youtube-dl/blob/master/README.md#synopsis). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.
+Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/rg3/youtube-dl/blob/master/README.md#options). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.
 ### Is there enough context in your bug report?

ChangeLog (new file, 438 lines)

@@ -0,0 +1,438 @@
version 2016.08.19
Core
- Remove output template description from --help
* Recognize lowercase units in parse_filesize
Extractors
+ [porncom] Add extractor for porn.com (#2251, #10251)
+ [generic] Add support for DBTV embeds
* [vk:wallpost] Fix audio extraction for new site layout
* [vk] Fix authentication
+ [hgtvcom:show] Add extractor for hgtv.com shows (#10365)
+ [discoverygo] Add support for other GO network sites

version 2016.08.17
Core
+ Add _get_netrc_login_info
Extractors
* [mofosex] Extract all formats (#10335)
+ [generic] Add support for vbox7 embeds
+ [vbox7] Add support for embed URLs
+ [viafree] Add extractor (#10358)
+ [mtg] Add support for viafree URLs (#10358)
* [theplatform] Extract all subtitles per language
+ [xvideos] Fix HLS extraction (#10356)
+ [amcnetworks] Add extractor
+ [bbc:playlist] Add support for pagination (#10349)
+ [fxnetworks] Add extractor (#9462)
* [cbslocal] Fix extraction for SendtoNews-based videos
* [sendtonews] Fix extraction
* [jwplatform] Extract video id from JWPlayer data
- [zippcast] Remove extractor (#10332)
+ [viceland] Add extractor (#8799)
+ [adobepass] Add base extractor for Adobe Pass Authentication
* [life:embed] Improve extraction
* [vgtv] Detect geo restricted videos (#10348)
+ [uplynk] Add extractor
* [xiami] Fix extraction (#10342)

version 2016.08.13
Core
* Show progress for curl external downloader
* Forward more options to curl external downloader
Extractors
* [pbs] Fix description extraction
* [franceculture] Fix extraction (#10324)
* [pornotube] Fix extraction (#10322)
* [4tube] Fix metadata extraction (#10321)
* [imgur] Fix width and height extraction (#10325)
* [expotv] Improve extraction
+ [vbox7] Fix extraction (#10309)
- [tapely] Remove extractor (#10323)
* [muenchentv] Fix extraction (#10313)
+ [24video] Add support for .me and .xxx TLDs
* [24video] Fix comment count extraction
* [sunporno] Add support for embed URLs
* [sunporno] Fix metadata extraction (#10316)
+ [hgtv] Add extractor for hgtv.ca (#3999)
- [pbs] Remove request to unavailable API
+ [pbs] Add support for high quality HTTP formats
+ [crunchyroll] Add support for HLS formats (#10301)

version 2016.08.12
Core
* Subtitles are now written as is. Newline conversions are disabled. (#10268)
+ Recognize more formats in unified_timestamp
Extractors
- [goldenmoustache] Remove extractor (#10298)
* [drtuber] Improve title extraction
* [drtuber] Make dislike count optional (#10297)
* [chirbit] Fix extraction (#10296)
* [francetvinfo] Relax URL regular expression
* [rtlnl] Relax URL regular expression (#10282)
* [formula1] Relax URL regular expression (#10283)
* [wat] Improve extraction (#10281)
* [ctsnews] Fix extraction

version 2016.08.10
Core
* Make --metadata-from-title non fatal when title does not match the pattern
* Introduce options for randomized sleep before each download
--min-sleep-interval and --max-sleep-interval (#9930)
* Respect default in _search_json_ld
Extractors
+ [uol] Add extractor for uol.com.br (#4263)
* [rbmaradio] Fix extraction and extract all formats (#10242)
+ [sonyliv] Add extractor for sonyliv.com (#10258)
* [aparat] Fix extraction
* [cwtv] Extract HTTP formats
+ [rozhlas] Add extractor for prehravac.rozhlas.cz (#10253)
* [kuwo:singer] Fix extraction

version 2016.08.07
Core
+ Add support for TV Parental Guidelines ratings in parse_age_limit
+ Add decode_png (#9706)
+ Add support for partOfTVSeries in JSON-LD
* Lower master M3U8 manifest preference for better format sorting
Extractors
+ [discoverygo] Add extractor (#10245)
* [flipagram] Make JSON-LD extraction non fatal
* [generic] Make JSON-LD extraction non fatal
+ [bbc] Add support for morph embeds (#10239)
* [tnaflixnetworkbase] Improve title extraction
* [tnaflix] Fix metadata extraction (#10249)
* [fox] Fix theplatform release URL query
* [openload] Fix extraction (#9706)
* [bbc] Skip duplicate manifest URLs
* [bbc] Improve format code
+ [bbc] Add support for DASH and F4M
* [bbc] Improve format sorting and listing
* [bbc] Improve playlist extraction
+ [pokemon] Add extractor (#10093)
+ [condenast] Add fallback scenario for video info extraction

version 2016.08.06
Core
* Add support for JSON-LD root list entries (#10203)
* Improve unified_timestamp
* Lower preference of RTSP formats in generic sorting
+ Add support for multiple properties in _og_search_property
* Improve password hiding from verbose output
Extractors
+ [adultswim] Add support for trailers (#10235)
* [archiveorg] Improve extraction (#10219)
+ [jwplatform] Add support for playlists
+ [jwplatform] Add support for relative URLs
* [jwplatform] Improve audio detection
+ [tvplay] Capture and output native error message
+ [tvplay] Extract series metadata
+ [tvplay] Add support for subtitles (#10194)
* [tvp] Improve extraction (#7799)
* [cbslocal] Fix timestamp parsing (#10213)
+ [naver] Add support for subtitles (#8096)
* [naver] Improve extraction
* [condenast] Improve extraction
* [engadget] Relax URL regular expression
* [5min] Fix extraction
+ [nationalgeographic] Add support for Episode Guide
+ [kaltura] Add support for subtitles
* [kaltura] Optimize network requests
+ [vodplatform] Add extractor for vod-platform.net
- [gamekings] Remove extractor
* [limelight] Extract HTTP formats
* [ntvru] Fix extraction
+ [comedycentral] Re-add :tds and :thedailyshow shortnames

version 2016.08.01
Fixed/improved extractors
- [yandexmusic:track] Adapt to changes in track location JSON (#10193)
- [bloomberg] Support another form of player (#10187)
- [limelight] Skip DRM protected videos
- [safari] Relax regular expressions for URL matching (#10202)
- [cwtv] Add support for cwtvpr.com (#10196)

version 2016.07.30
Fixed/improved extractors
- [twitch:clips] Sort formats
- [tv2] Use m3u8_native
- [tv2:article] Fix video detection (#10188)
- rtve (#10076)
- [dailymotion:playlist] Optimize download archive processing (#10180)

version 2016.07.28
Fixed/improved extractors
- shared (#10170)
- soundcloud (#10179)
- twitch (#9767)

version 2016.07.26.2
Fixed/improved extractors
- smotri
- camdemy
- mtv
- comedycentral
- cmt
- cbc
- mgtv
- orf

version 2016.07.24
New extractors
- arkena (#8682)
- lcp (#8682)
Fixed/improved extractors
- facebook (#10151)
- dailymail
- telegraaf
- dcn
- onet
- tvp
Miscellaneous
- Support $Time$ in DASH manifests

version 2016.07.22
New extractors
- odatv (#9285)
Fixed/improved extractors
- bbc
- youjizz (#10131)
- youtube (#10140)
- pornhub (#10138)
- eporner (#10139)

version 2016.07.17
New extractors
- nintendo (#9986)
- streamable (#9122)
Fixed/improved extractors
- ard (#10095)
- mtv
- comedycentral (#10101)
- viki (#10098)
- spike (#10106)
Miscellaneous
- Improved twitter player detection (#10090)

version 2016.07.16
New extractors
- ninenow (#5181)
Fixed/improved extractors
- rtve (#10076)
- brightcove
- 3qsdn
- syfy (#9087, #3820, #2388)
- youtube (#10083)
Miscellaneous
- Fix subtitle embedding for video-only and audio-only files (#10081)

version 2016.07.13
New extractors
- rudo
Fixed/improved extractors
- biobiochiletv
- tvplay
- dbtv
- brightcove
- tmz
- youtube (#10059)
- shahid (#10062)
- vk
- ellentv (#10067)

version 2016.07.11
New Extractors
- roosterteeth (#9864)
Fixed/improved extractors
- miomio (#9605)
- vuclip
- youtube
- vidzi (#10058)

version 2016.07.09.2
Fixed/improved extractors
- vimeo (#1638)
- facebook (#10048)
- lynda (#10047)
- animeondemand
Fixed/improved features
- Embedding subtitles no longer throws an error with problematic inputs (#9063)

version 2016.07.09.1
Fixed/improved extractors
- youtube
- ard
- srmediatek (#9373)

version 2016.07.09
New extractors
- Flipagram (#9898)
Fixed/improved extractors
- telecinco
- toutv
- radiocanada
- tweakers (#9516)
- lynda
- nick (#7542)
- polskieradio (#10028)
- le
- facebook (#9851)
- mgtv
- animeondemand (#10031)
Fixed/improved features
- `--postprocessor-args` and `--downloader-args` now accept non-ASCII inputs
on non-Windows systems

version 2016.07.07
New extractors
- kamcord (#10001)
Fixed/improved extractors
- spiegel (#10018)
- metacafe (#8539, #3253)
- onet (#9950)
- francetv (#9955)
- brightcove (#9965)
- daum (#9972)

version 2016.07.06
Fixed/improved extractors
- youtube (#10007, #10009)
- xuite
- stitcher
- spiegel
- slideshare
- sandia
- rtvnh
- prosiebensat1
- onionstudios

version 2016.07.05
Fixed/improved extractors
- brightcove
- yahoo (#9995)
- pornhub (#9997)
- iqiyi
- kaltura (#5557)
- la7

Changed features
- Rename --cn-verification-proxy to --geo-verification-proxy
Miscellaneous
- Add script for displaying downloads statistics

version 2016.07.03.1
Fixed/improved extractors
- theplatform
- aenetworks
- nationalgeographic
- hrti (#9482)
- facebook (#5701)
- buzzfeed (#5701)
- rai (#8617, #9157, #9232, #8552, #8551)
- nationalgeographic (#9991)
- iqiyi

version 2016.07.03
New extractors
- hrti (#9482)
Fixed/improved extractors
- vk (#9981)
- facebook (#9938)
- xtube (#9953, #9961)

version 2016.07.02
New extractors
- fusion (#9958)
Fixed/improved extractors
- twitch (#9975)
- vine (#9970)
- periscope (#9967)
- pornhub (#8696)

version 2016.07.01
New extractors
- 9c9media
- ctvnews (#2156)
- ctv (#4077)
Fixed/Improved extractors
- rds
- meta (#8789)
- pornhub (#9964)
- sixplay (#2183)
New features
- Accept quoted strings across multiple lines (#9940)

Makefile

@@ -94,7 +94,7 @@ _EXTRACTOR_FILES != find youtube_dl/extractor -iname '*.py' -and -not -iname 'lazy_extractors.py'
 youtube_dl/extractor/lazy_extractors.py: devscripts/make_lazy_extractors.py devscripts/lazy_load_template.py $(_EXTRACTOR_FILES)
 	$(PYTHON) devscripts/make_lazy_extractors.py $@

-youtube-dl.tar.gz: youtube-dl README.md README.txt youtube-dl.1 youtube-dl.bash-completion youtube-dl.zsh youtube-dl.fish
+youtube-dl.tar.gz: youtube-dl README.md README.txt youtube-dl.1 youtube-dl.bash-completion youtube-dl.zsh youtube-dl.fish ChangeLog
 	@tar -czf youtube-dl.tar.gz --transform "s|^|youtube-dl/|" --owner 0 --group 0 \
 		--exclude '*.DS_Store' \
 		--exclude '*.kate-swp' \
@@ -107,7 +107,7 @@ youtube-dl.tar.gz: youtube-dl README.md README.txt youtube-dl.1 youtube-dl.bash-
 		--exclude 'docs/_build' \
 		-- \
 		bin devscripts test youtube_dl docs \
-		LICENSE README.md README.txt \
+		ChangeLog LICENSE README.md README.txt \
 		Makefile MANIFEST.in youtube-dl.1 youtube-dl.bash-completion \
 		youtube-dl.zsh youtube-dl.fish setup.py \
 		youtube-dl

README.md

@@ -201,32 +201,8 @@ which means you can modify it, redistribute it or use it however you like.
 -a, --batch-file FILE            File containing URLs to download ('-' for
                                  stdin)
 --id                             Use only video ID in file name
--o, --output TEMPLATE            Output filename template. Use %(title)s to
-                                 get the title, %(uploader)s for the
-                                 uploader name, %(uploader_id)s for the
-                                 uploader nickname if different,
-                                 %(autonumber)s to get an automatically
-                                 incremented number, %(ext)s for the
-                                 filename extension, %(format)s for the
-                                 format description (like "22 - 1280x720" or
-                                 "HD"), %(format_id)s for the unique id of
-                                 the format (like YouTube's itags: "137"),
-                                 %(upload_date)s for the upload date
-                                 (YYYYMMDD), %(extractor)s for the provider
-                                 (youtube, metacafe, etc), %(id)s for the
-                                 video id, %(playlist_title)s,
-                                 %(playlist_id)s, or %(playlist)s (=title if
-                                 present, ID otherwise) for the playlist the
-                                 video is in, %(playlist_index)s for the
-                                 position in the playlist. %(height)s and
-                                 %(width)s for the width and height of the
-                                 video format. %(resolution)s for a textual
-                                 description of the resolution of the video
-                                 format. %% for a literal percent. Use - to
-                                 output to stdout. Can also be used to
-                                 download to a different directory, for
-                                 example with -o '/my/downloads/%(uploader)s
-                                 /%(title)s-%(id)s.%(ext)s' .
+-o, --output TEMPLATE            Output filename template, see the "OUTPUT
+                                 TEMPLATE" for all the info
 --autonumber-size NUMBER         Specify the number of digits in
                                  %(autonumber)s when it is present in output
                                  filename template or --auto-number option
@@ -330,7 +306,15 @@ which means you can modify it, redistribute it or use it however you like.
                                  bidirectional text support. Requires bidiv
                                  or fribidi executable in PATH
 --sleep-interval SECONDS         Number of seconds to sleep before each
-                                 download.
+                                 download when used alone or a lower bound
+                                 of a range for randomized sleep before each
+                                 download (minimum possible number of
+                                 seconds to sleep) when used along with
+                                 --max-sleep-interval.
+--max-sleep-interval SECONDS     Upper bound of a range for randomized sleep
+                                 before each download (maximum possible
+                                 number of seconds to sleep). Must only be
+                                 used along with --min-sleep-interval.

 ## Video Format Options:
 -f, --format FORMAT              Video format code, see the "FORMAT
@@ -1196,7 +1180,7 @@ Make sure that someone has not already opened the issue you're trying to open. S
 ### Why are existing options not enough?
-Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/rg3/youtube-dl/blob/master/README.md#synopsis). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.
+Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/rg3/youtube-dl/blob/master/README.md#options). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.
 ### Is there enough context in your bug report?

devscripts/prepare_manpage.py

@@ -54,17 +54,21 @@ def filter_options(readme):
         if in_options:
             if line.lstrip().startswith('-'):
-                option, description = re.split(r'\s{2,}', line.lstrip())
-                split_option = option.split(' ')
+                split = re.split(r'\s{2,}', line.lstrip())
+                # Description string may start with `-` as well. If there is
+                # only one piece then it's a description, not an option.
+                if len(split) > 1:
+                    option, description = split
+                    split_option = option.split(' ')

-                if not split_option[-1].startswith('-'):  # metavar
-                    option = ' '.join(split_option[:-1] + ['*%s*' % split_option[-1]])
+                    if not split_option[-1].startswith('-'):  # metavar
+                        option = ' '.join(split_option[:-1] + ['*%s*' % split_option[-1]])

-                # Pandoc's definition_lists. See http://pandoc.org/README.html
-                # for more information.
-                ret += '\n%s\n: %s\n' % (option, description)
-            else:
-                ret += line.lstrip() + '\n'
+                    # Pandoc's definition_lists. See http://pandoc.org/README.html
+                    # for more information.
+                    ret += '\n%s\n: %s\n' % (option, description)
+                    continue
+
+            ret += line.lstrip() + '\n'
         else:
             ret += line + '\n'
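For illustration, here is the failure the new guard addresses: a wrapped description line that itself starts with a dash splits into a single piece, so the old unconditional unpacking raised ValueError (sample lines made up to match the README's layout):

    import re

    # A real option line splits into two pieces, so unpacking works:
    re.split(r'\s{2,}', '--sleep-interval SECONDS    Number of seconds to sleep before each')
    # -> ['--sleep-interval SECONDS', 'Number of seconds to sleep before each']

    # A continuation line such as '--max-sleep-interval.' yields one piece,
    # so `option, description = re.split(...)` blew up before the fix:
    re.split(r'\s{2,}', '--max-sleep-interval.')
    # -> ['--max-sleep-interval.']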

devscripts/release.sh

@@ -71,9 +71,12 @@
 /bin/echo -e "\n### Changing version in version.py..."
 sed -i "s/__version__ = '.*'/__version__ = '$version'/" youtube_dl/version.py

+/bin/echo -e "\n### Changing version in ChangeLog..."
+sed -i "s/<unreleased>/$version/" ChangeLog
+
 /bin/echo -e "\n### Committing documentation, templates and youtube_dl/version.py..."
 make README.md CONTRIBUTING.md .github/ISSUE_TEMPLATE.md supportedsites
-git add README.md CONTRIBUTING.md .github/ISSUE_TEMPLATE.md docs/supportedsites.md youtube_dl/version.py
+git add README.md CONTRIBUTING.md .github/ISSUE_TEMPLATE.md docs/supportedsites.md youtube_dl/version.py ChangeLog
 git commit $gpg_sign_commits -m "release $version"

 /bin/echo -e "\n### Now tagging, signing and pushing..."

devscripts/show-downloads-statistics.py

@@ -1,6 +1,7 @@
 #!/usr/bin/env python
 from __future__ import unicode_literals

+import itertools
 import json
 import os
 import re
@@ -21,21 +22,26 @@ def format_size(bytes):

 total_bytes = 0

-releases = json.loads(compat_urllib_request.urlopen(
-    'https://api.github.com/repos/rg3/youtube-dl/releases').read().decode('utf-8'))
+for page in itertools.count(1):
+    releases = json.loads(compat_urllib_request.urlopen(
+        'https://api.github.com/repos/rg3/youtube-dl/releases?page=%s' % page
+    ).read().decode('utf-8'))

-for release in releases:
-    compat_print(release['name'])
-    for asset in release['assets']:
-        asset_name = asset['name']
-        total_bytes += asset['download_count'] * asset['size']
-        if all(not re.match(p, asset_name) for p in (
-                r'^youtube-dl$',
-                r'^youtube-dl-\d{4}\.\d{2}\.\d{2}(?:\.\d+)?\.tar\.gz$',
-                r'^youtube-dl\.exe$')):
-            continue
-        compat_print(
-            ' %s size: %s downloads: %d'
-            % (asset_name, format_size(asset['size']), asset['download_count']))
+    if not releases:
+        break
+
+    for release in releases:
+        compat_print(release['name'])
+        for asset in release['assets']:
+            asset_name = asset['name']
+            total_bytes += asset['download_count'] * asset['size']
+            if all(not re.match(p, asset_name) for p in (
+                    r'^youtube-dl$',
+                    r'^youtube-dl-\d{4}\.\d{2}\.\d{2}(?:\.\d+)?\.tar\.gz$',
+                    r'^youtube-dl\.exe$')):
+                continue
+            compat_print(
+                ' %s size: %s downloads: %d'
+                % (asset_name, format_size(asset['size']), asset['download_count']))

 compat_print('total downloads traffic: %s' % format_size(total_bytes))

docs/supportedsites.md

@@ -35,6 +35,7 @@
 - **AlJazeera**
 - **Allocine**
 - **AlphaPorno**
+- **AMCNetworks**
 - **AnimeOnDemand**
 - **anitube.se**
 - **AnySex**
@@ -142,7 +143,7 @@
 - **CollegeRama**
 - **ComCarCoff**
 - **ComedyCentral**
-- **ComedyCentralShows**: The Daily Show / The Colbert Report
+- **ComedyCentralShortname**
 - **ComedyCentralTV**
 - **CondeNast**: Condé Nast media group: Allure, Architectural Digest, Ars Technica, Bon Appétit, Brides, Condé Nast, Condé Nast Traveler, Details, Epicurious, GQ, Glamour, Golf Digest, SELF, Teen Vogue, The New Yorker, Vanity Fair, Vogue, W Magazine, WIRED
 - **Coub**
@@ -182,6 +183,7 @@
 - **DigitallySpeaking**
 - **Digiteka**
 - **Discovery**
+- **DiscoveryGo**
 - **Dotsub**
 - **DouyuTV**: 斗鱼
 - **DPlay**
@@ -237,7 +239,6 @@
 - **FoxSports**
 - **france2.fr:generation-quoi**
 - **FranceCulture**
-- **FranceCultureEmission**
 - **FranceInter**
 - **francetv**: France 2, 3, 4, 5 and Ô
 - **francetvinfo.fr**
@@ -247,8 +248,8 @@
 - **Funimation**
 - **FunnyOrDie**
 - **Fusion**
+- **FXNetworks**
 - **GameInformer**
-- **Gamekings**
 - **GameOne**
 - **gameone:playlist**
 - **Gamersyde**
@@ -265,7 +266,6 @@
 - **GloboArticle**
 - **GodTube**
 - **GodTV**
-- **GoldenMoustache**
 - **Golem**
 - **GoogleDrive**
 - **Goshgay**
@@ -278,6 +278,8 @@
 - **HellPorno**
 - **Helsinki**: helsinki.fi
 - **HentaiStigma**
+- **HGTV**
+- **hgtv.com:show**
 - **HistoricFilms**
 - **history:topic**: History.com Topic
 - **hitbox**
@@ -399,9 +401,9 @@
 - **Moviezine**
 - **MPORA**
 - **MSN**
+- **mtg**: MTG services
 - **MTV**
 - **mtv.de**
-- **mtviggy.com**
 - **mtvservices:embedded**
 - **MuenchenTV**: münchen.tv
 - **MusicPlayOn**
@@ -417,7 +419,8 @@
 - **MyVidster**
 - **n-tv.de**
 - **natgeo**
-- **natgeo:channel**
+- **natgeo:episodeguide**
+- **natgeo:video**
 - **Naver**
 - **NBA**
 - **NBC**
@@ -441,7 +444,6 @@
 - **Newstube**
 - **NextMedia**: 蘋果日報
 - **NextMediaActionNews**: 蘋果日報 - 動新聞
-- **nextmovie.com**
 - **nfb**: National Film Board of Canada
 - **nfl.com**
 - **nhl.com**
@@ -520,7 +522,9 @@
 - **plus.google**: Google Plus
 - **pluzz.francetv.fr**
 - **podomatic**
+- **Pokemon**
 - **PolskieRadio**
+- **PornCom**
 - **PornHd**
 - **PornHub**: PornHub and Thumbzilla
 - **PornHubPlaylist**
@@ -564,6 +568,7 @@
 - **RoosterTeeth**
 - **RottenTomatoes**
 - **Roxwel**
+- **Rozhlas**
 - **RTBF**
 - **rte**: Raidió Teilifís Éireann TV
 - **rte:radio**: Raidió Teilifís Éireann radio
@@ -621,6 +626,7 @@
 - **smotri:user**: Smotri.com user videos
 - **Snotr**
 - **Sohu**
+- **SonyLIV**
 - **soundcloud**
 - **soundcloud:playlist**
 - **soundcloud:search**: Soundcloud search
@@ -663,7 +669,6 @@
 - **SztvHu**
 - **Tagesschau**
 - **tagesschau:player**
-- **Tapely**
 - **Tass**
 - **TDSLifeway**
 - **teachertube**: teachertube.com videos
@@ -699,6 +704,7 @@
 - **TNAFlix**
 - **TNAFlixNetworkEmbed**
 - **toggle**
+- **Tosh**: Tosh.0
 - **tou.tv**
 - **Toypics**: Toypics user profile
 - **ToypicsUser**: Toypics user profile
@@ -728,8 +734,8 @@
 - **tvigle**: Интернет-телевидение Tvigle.ru
 - **tvland.com**
 - **tvp**: Telewizja Polska
+- **tvp:embed**: Telewizja Polska
 - **tvp:series**
-- **TVPlay**: TV3Play and related services
 - **Tweakers**
 - **twitch:chapter**
 - **twitch:clips**
@@ -745,6 +751,9 @@
 - **udemy:course**
 - **UDNEmbed**: 聯合影音
 - **Unistra**
+- **uol.com.br**
+- **uplynk**
+- **uplynk:preplay**
 - **Urort**: NRK P3 Urørt
 - **URPlay**
 - **USAToday**
@@ -762,7 +771,9 @@
 - **VevoPlaylist**
 - **VGTV**: VGTV, BTTV, FTV, Aftenposten and Aftonbladet
 - **vh1.com**
+- **Viafree**
 - **Vice**
+- **Viceland**
 - **ViceShow**
 - **Vidbit**
 - **Viddler**
@@ -807,6 +818,7 @@
 - **vk:wallpost**
 - **vlive**
 - **Vodlocker**
+- **VODPlatform**
 - **VoiceRepublic**
 - **VoxMedia**
 - **Vporn**
@@ -883,4 +895,3 @@
 - **ZDFChannel**
 - **zingmp3:album**: mp3.zing.vn albums
 - **zingmp3:song**: mp3.zing.vn songs
-- **ZippCast**

test/test_InfoExtractor.py

@@ -48,6 +48,9 @@ class TestInfoExtractor(unittest.TestCase):
         self.assertEqual(ie._og_search_property('foobar', html), 'Foo')
         self.assertEqual(ie._og_search_property('test1', html), 'foo > < bar')
         self.assertEqual(ie._og_search_property('test2', html), 'foo >//< bar')
+        self.assertEqual(ie._og_search_property(('test0', 'test1'), html), 'foo > < bar')
+        self.assertRaises(RegexNotFoundError, ie._og_search_property, 'test0', html, None, fatal=True)
+        self.assertRaises(RegexNotFoundError, ie._og_search_property, ('test0', 'test00'), html, None, fatal=True)

     def test_html_search_meta(self):
         ie = self.ie

test/test_all_urls.py

@@ -101,8 +101,6 @@ class TestAllURLsMatching(unittest.TestCase):
         self.assertMatch(':ytsubs', ['youtube:subscriptions'])
         self.assertMatch(':ytsubscriptions', ['youtube:subscriptions'])
         self.assertMatch(':ythistory', ['youtube:history'])
-        self.assertMatch(':thedailyshow', ['ComedyCentralShows'])
-        self.assertMatch(':tds', ['ComedyCentralShows'])

     def test_vimeo_matching(self):
         self.assertMatch('https://vimeo.com/channels/tributes', ['vimeo:channel'])

View File: test/test_utils.py

@@ -42,6 +42,7 @@ from youtube_dl.utils import (
    ohdave_rsa_encrypt,
    OnDemandPagedList,
    orderedSet,
+   parse_age_limit,
    parse_duration,
    parse_filesize,
    parse_count,
@@ -308,6 +309,7 @@ class TestUtil(unittest.TestCase):
        self.assertEqual(unified_timestamp('25-09-2014'), 1411603200)
        self.assertEqual(unified_timestamp('27.02.2016 17:30'), 1456594200)
        self.assertEqual(unified_timestamp('UNKNOWN DATE FORMAT'), None)
+       self.assertEqual(unified_timestamp('May 16, 2016 11:15 PM'), 1463440500)

    def test_determine_ext(self):
        self.assertEqual(determine_ext('http://example.com/foo/bar.mp4/?download'), 'mp4')
@@ -431,6 +433,20 @@ class TestUtil(unittest.TestCase):
            url_basename('http://media.w3.org/2010/05/sintel/trailer.mp4'),
            'trailer.mp4')

+   def test_parse_age_limit(self):
+       self.assertEqual(parse_age_limit(None), None)
+       self.assertEqual(parse_age_limit(False), None)
+       self.assertEqual(parse_age_limit('invalid'), None)
+       self.assertEqual(parse_age_limit(0), 0)
+       self.assertEqual(parse_age_limit(18), 18)
+       self.assertEqual(parse_age_limit(21), 21)
+       self.assertEqual(parse_age_limit(22), None)
+       self.assertEqual(parse_age_limit('18'), 18)
+       self.assertEqual(parse_age_limit('18+'), 18)
+       self.assertEqual(parse_age_limit('PG-13'), 13)
+       self.assertEqual(parse_age_limit('TV-14'), 14)
+       self.assertEqual(parse_age_limit('TV-MA'), 17)
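The assertions above pin down parse_age_limit's contract: plausible bare ages pass through, '18+'-style strings are stripped, US movie and TV ratings map to ages, and everything else becomes None. A minimal sketch consistent with those assertions (not the actual youtube_dl.utils implementation; the rating tables here are assumptions):

import re

_US = {'G': 0, 'PG': 10, 'PG-13': 13, 'R': 16, 'NC-17': 18}
_TV = {'TV-Y': 0, 'TV-Y7': 7, 'TV-G': 0, 'TV-PG': 0, 'TV-14': 14, 'TV-MA': 17}

def parse_age_limit_sketch(s):
    if s is None or isinstance(s, bool):  # bool is an int subclass; reject it first
        return None
    if isinstance(s, int):
        return s if 0 <= s <= 21 else None  # 22 and up is not a plausible age limit
    if not isinstance(s, str):
        return None
    m = re.match(r'^(\d{1,2})\+?$', s)  # '18' and '18+' both parse to 18
    if m:
        age = int(m.group(1))
        return age if 0 <= age <= 21 else None
    if s in _US:
        return _US[s]
    return _TV.get(s)  # unknown strings such as 'invalid' fall through to None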
    def test_parse_duration(self):
        self.assertEqual(parse_duration(None), None)
        self.assertEqual(parse_duration(False), None)
@@ -801,7 +817,9 @@ class TestUtil(unittest.TestCase):
        self.assertEqual(parse_filesize('2 MiB'), 2097152)
        self.assertEqual(parse_filesize('5 GB'), 5000000000)
        self.assertEqual(parse_filesize('1.2Tb'), 1200000000000)
+       self.assertEqual(parse_filesize('1.2tb'), 1200000000000)
        self.assertEqual(parse_filesize('1,24 KB'), 1240)
+       self.assertEqual(parse_filesize('1,24 kb'), 1240)

    def test_parse_count(self):
        self.assertEqual(parse_count(None), None)
@@ -952,6 +970,7 @@ The first line
        self.assertEqual(cli_option({'proxy': '127.0.0.1:3128'}, '--proxy', 'proxy'), ['--proxy', '127.0.0.1:3128'])
        self.assertEqual(cli_option({'proxy': None}, '--proxy', 'proxy'), [])
        self.assertEqual(cli_option({}, '--proxy', 'proxy'), [])
+       self.assertEqual(cli_option({'retries': 10}, '--retries', 'retries'), ['--retries', '10'])

    def test_cli_valueless_option(self):
        self.assertEqual(cli_valueless_option(

View File: test/test_verbose_output.py

@@ -0,0 +1,70 @@
#!/usr/bin/env python
# coding: utf-8
from __future__ import unicode_literals

import unittest

import sys
import os
import subprocess

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

rootDir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))


class TestVerboseOutput(unittest.TestCase):
    def test_private_info_arg(self):
        outp = subprocess.Popen(
            [
                sys.executable, 'youtube_dl/__main__.py', '-v',
                '--username', 'johnsmith@gmail.com',
                '--password', 'secret',
            ], cwd=rootDir, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        sout, serr = outp.communicate()
        self.assertTrue(b'--username' in serr)
        self.assertTrue(b'johnsmith' not in serr)
        self.assertTrue(b'--password' in serr)
        self.assertTrue(b'secret' not in serr)

    def test_private_info_shortarg(self):
        outp = subprocess.Popen(
            [
                sys.executable, 'youtube_dl/__main__.py', '-v',
                '-u', 'johnsmith@gmail.com',
                '-p', 'secret',
            ], cwd=rootDir, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        sout, serr = outp.communicate()
        self.assertTrue(b'-u' in serr)
        self.assertTrue(b'johnsmith' not in serr)
        self.assertTrue(b'-p' in serr)
        self.assertTrue(b'secret' not in serr)

    def test_private_info_eq(self):
        outp = subprocess.Popen(
            [
                sys.executable, 'youtube_dl/__main__.py', '-v',
                '--username=johnsmith@gmail.com',
                '--password=secret',
            ], cwd=rootDir, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        sout, serr = outp.communicate()
        self.assertTrue(b'--username' in serr)
        self.assertTrue(b'johnsmith' not in serr)
        self.assertTrue(b'--password' in serr)
        self.assertTrue(b'secret' not in serr)

    def test_private_info_shortarg_eq(self):
        outp = subprocess.Popen(
            [
                sys.executable, 'youtube_dl/__main__.py', '-v',
                '-u=johnsmith@gmail.com',
                '-p=secret',
            ], cwd=rootDir, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        sout, serr = outp.communicate()
        self.assertTrue(b'-u' in serr)
        self.assertTrue(b'johnsmith' not in serr)
        self.assertTrue(b'-p' in serr)
        self.assertTrue(b'secret' not in serr)

if __name__ == '__main__':
    unittest.main()
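These tests assert that -v output echoes the private flags but never their values. A sketch of the scrubbing they imply, covering both '--opt value' and '--opt=value' forms (the helper name and option list are assumptions, not youtube-dl's actual code):

import re

PRIVATE_OPTS = ['-p', '--password', '-u', '--username']

def hide_login_info(opts):
    eqre = re.compile('^(?P<key>' + '|'.join(re.escape(po) for po in PRIVATE_OPTS) + ')=.+$')
    opts = [eqre.sub(r'\g<key>=PRIVATE', o) for o in opts]  # --password=secret -> --password=PRIVATE
    for po in PRIVATE_OPTS:
        try:
            opts[opts.index(po) + 1] = 'PRIVATE'  # -u johnsmith -> -u PRIVATE
        except ValueError:
            pass
    return opts

print(hide_login_info(['-u', 'johnsmith@gmail.com', '--password=secret']))
# ['-u', 'PRIVATE', '--password=PRIVATE']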

View File: youtube_dl/YoutubeDL.py

@@ -249,7 +249,16 @@ class YoutubeDL(object):
    source_address:    (Experimental) Client-side IP address to bind to.
    call_home:         Boolean, true iff we are allowed to contact the
                       youtube-dl servers for debugging.
-   sleep_interval:    Number of seconds to sleep before each download.
+   sleep_interval:    Number of seconds to sleep before each download when
+                      used alone or a lower bound of a range for randomized
+                      sleep before each download (minimum possible number
+                      of seconds to sleep) when used along with
+                      max_sleep_interval.
+   max_sleep_interval:Upper bound of a range for randomized sleep before each
+                      download (maximum possible number of seconds to sleep).
+                      Must only be used along with sleep_interval.
+                      Actual sleep time will be a random float from range
+                      [sleep_interval; max_sleep_interval].
    listformats:       Print an overview of available video formats and exit.
    list_thumbnails:   Print a table of all thumbnails and exit.
    match_filter:      A function that gets called with the info_dict of
@@ -1594,7 +1603,9 @@ class YoutubeDL(object):
                    self.to_screen('[info] Video subtitle %s.%s is already present' % (sub_lang, sub_format))
                else:
                    self.to_screen('[info] Writing video subtitles to: ' + sub_filename)
-                   with io.open(encodeFilename(sub_filename), 'w', encoding='utf-8') as subfile:
+                   # Use newline='' to prevent conversion of newline characters
+                   # See https://github.com/rg3/youtube-dl/issues/10268
+                   with io.open(encodeFilename(sub_filename), 'w', encoding='utf-8', newline='') as subfile:
                        subfile.write(sub_data)
            except (OSError, IOError):
                self.report_error('Cannot write subtitles file ' + sub_filename)

View File: youtube_dl/__init__.py

@@ -145,6 +145,16 @@ def _real_main(argv=None):
        if numeric_limit is None:
            parser.error('invalid max_filesize specified')
        opts.max_filesize = numeric_limit
+   if opts.sleep_interval is not None:
+       if opts.sleep_interval < 0:
+           parser.error('sleep interval must be positive or 0')
+   if opts.max_sleep_interval is not None:
+       if opts.max_sleep_interval < 0:
+           parser.error('max sleep interval must be positive or 0')
+       if opts.max_sleep_interval < opts.sleep_interval:
+           parser.error('max sleep interval must be greater than or equal to min sleep interval')
+   else:
+       opts.max_sleep_interval = opts.sleep_interval

    def parse_retries(retries):
        if retries in ('inf', 'infinite'):
@@ -370,6 +380,7 @@ def _real_main(argv=None):
        'source_address': opts.source_address,
        'call_home': opts.call_home,
        'sleep_interval': opts.sleep_interval,
+       'max_sleep_interval': opts.max_sleep_interval,
        'external_downloader': opts.external_downloader,
        'list_thumbnails': opts.list_thumbnails,
        'playlist_items': opts.playlist_items,

View File: youtube_dl/downloader/common.py

@@ -4,6 +4,7 @@ import os
import re
import sys
import time
+import random

from ..compat import compat_os_name
from ..utils import (
@@ -342,8 +343,11 @@ class FileDownloader(object):
            })
            return True

-       sleep_interval = self.params.get('sleep_interval')
-       if sleep_interval:
+       min_sleep_interval = self.params.get('sleep_interval')
+       if min_sleep_interval:
+           max_sleep_interval = self.params.get('max_sleep_interval', min_sleep_interval)
+           sleep_interval = random.uniform(min_sleep_interval, max_sleep_interval)
            self.to_screen('[download] Sleeping %s seconds...' % sleep_interval)
            time.sleep(sleep_interval)

View File: youtube_dl/downloader/external.py

@@ -96,6 +96,12 @@ class CurlFD(ExternalFD):
        cmd = [self.exe, '--location', '-o', tmpfilename]
        for key, val in info_dict['http_headers'].items():
            cmd += ['--header', '%s: %s' % (key, val)]
+       cmd += self._bool_option('--continue-at', 'continuedl', '-', '0')
+       cmd += self._valueless_option('--silent', 'noprogress')
+       cmd += self._valueless_option('--verbose', 'verbose')
+       cmd += self._option('--limit-rate', 'ratelimit')
+       cmd += self._option('--retry', 'retries')
+       cmd += self._option('--max-filesize', 'max_filesize')
        cmd += self._option('--interface', 'source_address')
        cmd += self._option('--proxy', 'proxy')
        cmd += self._valueless_option('--insecure', 'nocheckcertificate')
@@ -103,6 +109,16 @@ class CurlFD(ExternalFD):
        cmd += ['--', info_dict['url']]
        return cmd

+   def _call_downloader(self, tmpfilename, info_dict):
+       cmd = [encodeArgument(a) for a in self._make_cmd(tmpfilename, info_dict)]
+       self._debug_cmd(cmd)
+       # curl writes the progress to stderr so don't capture it.
+       p = subprocess.Popen(cmd)
+       p.communicate()
+       return p.returncode


class AxelFD(ExternalFD):
    AVAILABLE_OPT = '-V'
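For reference, the helpers added to CurlFD above translate youtube-dl params straight into curl flags; with illustrative params the generated argv looks roughly like this (all values made up):

# params: {'continuedl': True, 'noprogress': True, 'ratelimit': 1000000, 'retries': 10}
# _bool_option('--continue-at', 'continuedl', '-', '0') -> ['--continue-at', '-']
# _valueless_option('--silent', 'noprogress')           -> ['--silent']
# _option('--limit-rate', 'ratelimit')                  -> ['--limit-rate', '1000000']
# _option('--retry', 'retries')                         -> ['--retry', '10']
# so _make_cmd yields roughly:
# curl --location -o <tmpfile> --continue-at - --silent --limit-rate 1000000 --retry 10 -- <url>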

View File: youtube_dl/downloader/hls.py

@@ -20,6 +20,7 @@ from ..utils import (
    encodeFilename,
    sanitize_open,
    parse_m3u8_attributes,
+   update_url_query,
)
@@ -82,6 +83,7 @@ class HlsFD(FragmentFD):
        self._prepare_and_start_frag_download(ctx)

+       extra_param_to_segment_url = info_dict.get('extra_param_to_segment_url')
        i = 0
        media_sequence = 0
        decrypt_info = {'METHOD': 'NONE'}
@@ -95,6 +97,8 @@ class HlsFD(FragmentFD):
                        if re.match(r'^https?://', line)
                        else compat_urlparse.urljoin(man_url, line))
                    frag_filename = '%s-Frag%d' % (ctx['tmpfilename'], i)
+                   if extra_param_to_segment_url:
+                       frag_url = update_url_query(frag_url, extra_param_to_segment_url)
                    success = ctx['dl'].download(frag_filename, {'url': frag_url})
                    if not success:
                        return False
@@ -120,6 +124,8 @@ class HlsFD(FragmentFD):
                    if not re.match(r'^https?://', decrypt_info['URI']):
                        decrypt_info['URI'] = compat_urlparse.urljoin(
                            man_url, decrypt_info['URI'])
+                   if extra_param_to_segment_url:
+                       decrypt_info['URI'] = update_url_query(decrypt_info['URI'], extra_param_to_segment_url)
                    decrypt_info['KEY'] = self.ydl.urlopen(decrypt_info['URI']).read()
                elif line.startswith('#EXT-X-MEDIA-SEQUENCE'):
                    media_sequence = int(line[22:])
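extra_param_to_segment_url is a dict of extra query arguments; update_url_query simply merges it into each fragment (and decryption-key) URL. A quick illustration using the same helper the downloader imports (the token value is made up):

from youtube_dl.utils import update_url_query

extra = {'hdnea': 'st-1471600000-abc123'}  # e.g. an Akamai auth parameter, hypothetical
print(update_url_query('https://example.com/frag-1.ts?v=1', extra))
# -> roughly https://example.com/frag-1.ts?v=1&hdnea=st-1471600000-abc123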

View File: youtube_dl/extractor/adobepass.py

@@ -0,0 +1,134 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

import re
import time
import xml.etree.ElementTree as etree

from .common import InfoExtractor
from ..utils import (
    unescapeHTML,
    urlencode_postdata,
    unified_timestamp,
)


class AdobePassIE(InfoExtractor):
    _SERVICE_PROVIDER_TEMPLATE = 'https://sp.auth.adobe.com/adobe-services/%s'
    _USER_AGENT = 'Mozilla/5.0 (X11; Linux i686; rv:47.0) Gecko/20100101 Firefox/47.0'

    @staticmethod
    def _get_mvpd_resource(provider_id, title, guid, rating):
        channel = etree.Element('channel')
        channel_title = etree.SubElement(channel, 'title')
        channel_title.text = provider_id
        item = etree.SubElement(channel, 'item')
        resource_title = etree.SubElement(item, 'title')
        resource_title.text = title
        resource_guid = etree.SubElement(item, 'guid')
        resource_guid.text = guid
        resource_rating = etree.SubElement(item, 'media:rating')
        resource_rating.attrib = {'scheme': 'urn:v-chip'}
        resource_rating.text = rating
        return '<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">' + etree.tostring(channel).decode() + '</rss>'

    def _extract_mvpd_auth(self, url, video_id, requestor_id, resource):
        def xml_text(xml_str, tag):
            return self._search_regex(
                '<%s>(.+?)</%s>' % (tag, tag), xml_str, tag)

        mvpd_headers = {
            'ap_42': 'anonymous',
            'ap_11': 'Linux i686',
            'ap_z': self._USER_AGENT,
            'User-Agent': self._USER_AGENT,
        }

        guid = xml_text(resource, 'guid')
        requestor_info = self._downloader.cache.load('mvpd', requestor_id) or {}
        authn_token = requestor_info.get('authn_token')
        if authn_token:
            token_expires = unified_timestamp(re.sub(r'[_ ]GMT', '', xml_text(authn_token, 'simpleTokenExpires')))
            if token_expires and token_expires <= int(time.time()):
                authn_token = None
                requestor_info = {}
        if not authn_token:
            # TODO add support for other TV Providers
            mso_id = 'DTV'
            username, password = self._get_netrc_login_info(mso_id)
            if not username or not password:
                return ''

            def post_form(form_page, note, data={}):
                post_url = self._html_search_regex(r'<form[^>]+action=(["\'])(?P<url>.+?)\1', form_page, 'post url', group='url')
                return self._download_webpage(
                    post_url, video_id, note, data=urlencode_postdata(data or self._hidden_inputs(form_page)), headers={
                        'Content-Type': 'application/x-www-form-urlencoded',
                    })

            provider_redirect_page = self._download_webpage(
                self._SERVICE_PROVIDER_TEMPLATE % 'authenticate/saml', video_id,
                'Downloading Provider Redirect Page', query={
                    'noflash': 'true',
                    'mso_id': mso_id,
                    'requestor_id': requestor_id,
                    'no_iframe': 'false',
                    'domain_name': 'adobe.com',
                    'redirect_url': url,
                })
            provider_login_page = post_form(
                provider_redirect_page, 'Downloading Provider Login Page')
            mvpd_confirm_page = post_form(provider_login_page, 'Logging in', {
                'username': username,
                'password': password,
            })
            post_form(mvpd_confirm_page, 'Confirming Login')

            session = self._download_webpage(
                self._SERVICE_PROVIDER_TEMPLATE % 'session', video_id,
                'Retrieving Session', data=urlencode_postdata({
                    '_method': 'GET',
                    'requestor_id': requestor_id,
                }), headers=mvpd_headers)
            if '<pendingLogout' in session:
                self._downloader.cache.store('mvpd', requestor_id, {})
                return self._extract_mvpd_auth(url, video_id, requestor_id, resource)
            authn_token = unescapeHTML(xml_text(session, 'authnToken'))
            requestor_info['authn_token'] = authn_token
            self._downloader.cache.store('mvpd', requestor_id, requestor_info)

        authz_token = requestor_info.get(guid)
        if not authz_token:
            authorize = self._download_webpage(
                self._SERVICE_PROVIDER_TEMPLATE % 'authorize', video_id,
                'Retrieving Authorization Token', data=urlencode_postdata({
                    'resource_id': resource,
                    'requestor_id': requestor_id,
                    'authentication_token': authn_token,
                    'mso_id': xml_text(authn_token, 'simpleTokenMsoID'),
                    'userMeta': '1',
                }), headers=mvpd_headers)
            if '<pendingLogout' in authorize:
                self._downloader.cache.store('mvpd', requestor_id, {})
                return self._extract_mvpd_auth(url, video_id, requestor_id, resource)
            authz_token = unescapeHTML(xml_text(authorize, 'authzToken'))
            requestor_info[guid] = authz_token
            self._downloader.cache.store('mvpd', requestor_id, requestor_info)

        mvpd_headers.update({
            'ap_19': xml_text(authn_token, 'simpleSamlNameID'),
            'ap_23': xml_text(authn_token, 'simpleSamlSessionIndex'),
        })

        short_authorize = self._download_webpage(
            self._SERVICE_PROVIDER_TEMPLATE % 'shortAuthorize',
            video_id, 'Retrieving Media Token', data=urlencode_postdata({
                'authz_token': authz_token,
                'requestor_id': requestor_id,
                'session_guid': xml_text(authn_token, 'simpleTokenAuthenticationGuid'),
                'hashed_guid': 'false',
            }), headers=mvpd_headers)
        if '<pendingLogout' in short_authorize:
            self._downloader.cache.store('mvpd', requestor_id, {})
            return self._extract_mvpd_auth(url, video_id, requestor_id, resource)
        return short_authorize
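_get_mvpd_resource packs the provider id, title, guid and rating into the tiny MRSS document that the /authorize endpoint expects. With values borrowed from the AMC Networks test further down (the requestor id 'ifc' is a guess), it produces roughly:

# print(AdobePassIE._get_mvpd_resource('ifc', 'Maron - Season 4 - Step 1', 's3MX01Nl4vPH', 'TV-MA'))
# <rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>ifc</title>
# <item><title>Maron - Season 4 - Step 1</title><guid>s3MX01Nl4vPH</guid>
# <media:rating scheme="urn:v-chip">TV-MA</media:rating></item></channel></rss>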

View File: youtube_dl/extractor/adultswim.py

@@ -83,6 +83,20 @@ class AdultSwimIE(InfoExtractor):
            # m3u8 download
            'skip_download': True,
        }
+   }, {
+       # heroMetadata.trailer
+       'url': 'http://www.adultswim.com/videos/decker/inside-decker-a-new-hero/',
+       'info_dict': {
+           'id': 'I0LQFQkaSUaFp8PnAWHhoQ',
+           'ext': 'mp4',
+           'title': 'Decker - Inside Decker: A New Hero',
+           'description': 'md5:c916df071d425d62d70c86d4399d3ee0',
+           'duration': 249.008,
+       },
+       'params': {
+           # m3u8 download
+           'skip_download': True,
+       }
    }]

    @staticmethod
@@ -133,20 +147,26 @@ class AdultSwimIE(InfoExtractor):
        if video_info is None:
            if bootstrapped_data.get('slugged_video', {}).get('slug') == episode_path:
                video_info = bootstrapped_data['slugged_video']
-           else:
-               raise ExtractorError('Unable to find video info')
+       if not video_info:
+           video_info = bootstrapped_data.get('heroMetadata', {}).get('trailer', {}).get('video')
+       if not video_info:
+           raise ExtractorError('Unable to find video info')

        show = bootstrapped_data['show']
        show_title = show['title']
        stream = video_info.get('stream')
-       clips = [stream] if stream else video_info.get('clips')
-       if not clips:
+       if stream and stream.get('videoPlaybackID'):
+           segment_ids = [stream['videoPlaybackID']]
+       elif video_info.get('clips'):
+           segment_ids = [clip['videoPlaybackID'] for clip in video_info['clips']]
+       elif video_info.get('videoPlaybackID'):
+           segment_ids = [video_info['videoPlaybackID']]
+       else:
            raise ExtractorError(
                'This video is only available via cable service provider subscription that'
                ' is not currently supported. You may want to use --cookies.'
                if video_info.get('auth') is True else 'Unable to find stream or clips',
                expected=True)
-       segment_ids = [clip['videoPlaybackID'] for clip in clips]

        episode_id = video_info['id']
        episode_title = video_info['title']

View File: youtube_dl/extractor/aenetworks.py

@@ -109,7 +109,10 @@ class AENetworksIE(AENetworksBaseIE):
        info = self._parse_theplatform_metadata(theplatform_metadata)
        if theplatform_metadata.get('AETN$isBehindWall'):
            requestor_id = self._DOMAIN_TO_REQUESTOR_ID[domain]
-           resource = '<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>%s</title><item><title>%s</title><guid>%s</guid><media:rating scheme="urn:v-chip">%s</media:rating></item></channel></rss>' % (requestor_id, theplatform_metadata['title'], theplatform_metadata['AETN$PPL_pplProgramId'], theplatform_metadata['ratings'][0]['rating'])
+           resource = self._get_mvpd_resource(
+               requestor_id, theplatform_metadata['title'],
+               theplatform_metadata.get('AETN$PPL_pplProgramId') or theplatform_metadata.get('AETN$PPL_pplProgramId_OLD'),
+               theplatform_metadata['ratings'][0]['rating'])
            query['auth'] = self._extract_mvpd_auth(
                url, video_id, requestor_id, resource)
        info.update(self._search_json_ld(webpage, video_id, fatal=False))

View File: youtube_dl/extractor/amcnetworks.py

@@ -0,0 +1,91 @@
# coding: utf-8
from __future__ import unicode_literals

from .theplatform import ThePlatformIE
from ..utils import (
    update_url_query,
    parse_age_limit,
    int_or_none,
)


class AMCNetworksIE(ThePlatformIE):
    _VALID_URL = r'https?://(?:www\.)?(?:amc|bbcamerica|ifc|wetv)\.com/(?:movies/|shows/[^/]+/(?:full-episodes/)?season-\d+/episode-\d+(?:-(?:[^/]+/)?|/))(?P<id>[^/?#]+)'
    _TESTS = [{
        'url': 'http://www.ifc.com/shows/maron/season-04/episode-01/step-1',
        'md5': '',
        'info_dict': {
            'id': 's3MX01Nl4vPH',
            'ext': 'mp4',
            'title': 'Maron - Season 4 - Step 1',
            'description': 'In denial about his current situation, Marc is reluctantly convinced by his friends to enter rehab. Starring Marc Maron and Constance Zimmer.',
            'age_limit': 17,
            'upload_date': '20160505',
            'timestamp': 1462468831,
            'uploader': 'AMCN',
        },
        'params': {
            # m3u8 download
            'skip_download': True,
        },
    }, {
        'url': 'http://www.bbcamerica.com/shows/the-hunt/full-episodes/season-1/episode-01-the-hardest-challenge',
        'only_matching': True,
    }, {
        'url': 'http://www.amc.com/shows/preacher/full-episodes/season-01/episode-00/pilot',
        'only_matching': True,
    }, {
        'url': 'http://www.wetv.com/shows/million-dollar-matchmaker/season-01/episode-06-the-dumped-dj-and-shallow-hal',
        'only_matching': True,
    }, {
        'url': 'http://www.ifc.com/movies/chaos',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        display_id = self._match_id(url)
        webpage = self._download_webpage(url, display_id)
        query = {
            'mbr': 'true',
            'manifest': 'm3u',
        }
        media_url = self._search_regex(r'window\.platformLinkURL\s*=\s*[\'"]([^\'"]+)', webpage, 'media url')
        theplatform_metadata = self._download_theplatform_metadata(self._search_regex(
            r'https?://link.theplatform.com/s/([^?]+)', media_url, 'theplatform_path'), display_id)
        info = self._parse_theplatform_metadata(theplatform_metadata)
        video_id = theplatform_metadata['pid']
        title = theplatform_metadata['title']
        rating = theplatform_metadata['ratings'][0]['rating']
        auth_required = self._search_regex(r'window\.authRequired\s*=\s*(true|false);', webpage, 'auth required')
        if auth_required == 'true':
            requestor_id = self._search_regex(r'window\.requestor_id\s*=\s*[\'"]([^\'"]+)', webpage, 'requestor id')
            resource = self._get_mvpd_resource(requestor_id, title, video_id, rating)
            query['auth'] = self._extract_mvpd_auth(url, video_id, requestor_id, resource)
        media_url = update_url_query(media_url, query)
        formats, subtitles = self._extract_theplatform_smil(media_url, video_id)
        self._sort_formats(formats)
        info.update({
            'id': video_id,
            'subtitles': subtitles,
            'formats': formats,
            'age_limit': parse_age_limit(rating),
        })
        ns_keys = theplatform_metadata.get('$xmlns', {}).keys()
        if ns_keys:
            ns = list(ns_keys)[0]
            series = theplatform_metadata.get(ns + '$show')
            season_number = int_or_none(theplatform_metadata.get(ns + '$season'))
            episode = theplatform_metadata.get(ns + '$episodeTitle')
            episode_number = int_or_none(theplatform_metadata.get(ns + '$episode'))
            if season_number:
                title = 'Season %d - %s' % (season_number, title)
            if series:
                title = '%s - %s' % (series, title)
            info.update({
                'title': title,
                'series': series,
                'season_number': season_number,
                'episode': episode,
                'episode_number': episode_number,
            })
        return info
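The namespace-keyed metadata handling above composes the final title from series and season; the string logic alone, runnable in isolation:

series, season_number, title = 'Maron', 4, 'Step 1'
if season_number:
    title = 'Season %d - %s' % (season_number, title)
if series:
    title = '%s - %s' % (series, title)
print(title)  # Maron - Season 4 - Step 1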

View File: youtube_dl/extractor/aol.py

@@ -123,6 +123,10 @@ class AolFeaturesIE(InfoExtractor):
            'title': 'What To Watch - February 17, 2016',
        },
        'add_ie': ['FiveMin'],
+       'params': {
+           # encrypted m3u8 download
+           'skip_download': True,
+       },
    }]

    def _real_extract(self, url):

View File: youtube_dl/extractor/aparat.py

@@ -1,8 +1,6 @@
# coding: utf-8
from __future__ import unicode_literals

-import re

from .common import InfoExtractor
from ..utils import (
    ExtractorError,
@@ -15,7 +13,7 @@ class AparatIE(InfoExtractor):
    _TEST = {
        'url': 'http://www.aparat.com/v/wP8On',
-       'md5': '6714e0af7e0d875c5a39c4dc4ab46ad1',
+       'md5': '131aca2e14fe7c4dcb3c4877ba300c89',
        'info_dict': {
            'id': 'wP8On',
            'ext': 'mp4',
@@ -31,13 +29,13 @@ class AparatIE(InfoExtractor):
        # Note: There is an easier-to-parse configuration at
        # http://www.aparat.com/video/video/config/videohash/%video_id
        # but the URL in there does not work
-       embed_url = ('http://www.aparat.com/video/video/embed/videohash/' +
-                    video_id + '/vt/frame')
+       embed_url = 'http://www.aparat.com/video/video/embed/vt/frame/showvideo/yes/videohash/' + video_id
        webpage = self._download_webpage(embed_url, video_id)

-       video_urls = [video_url.replace('\\/', '/') for video_url in re.findall(
-           r'(?:fileList\[[0-9]+\]\s*=|"file"\s*:)\s*"([^"]+)"', webpage)]
-       for i, video_url in enumerate(video_urls):
+       file_list = self._parse_json(self._search_regex(
+           r'fileList\s*=\s*JSON\.parse\(\'([^\']+)\'\)', webpage, 'file list'), video_id)
+       for i, item in enumerate(file_list[0]):
+           video_url = item['file']
            req = HEADRequest(video_url)
            res = self._request_webpage(
                req, video_id, note='Testing video URL %d' % i, errnote=False)

View File: youtube_dl/extractor/archiveorg.py

@@ -1,67 +1,65 @@
from __future__ import unicode_literals

-from .common import InfoExtractor
-from ..utils import unified_strdate
+from .jwplatform import JWPlatformBaseIE
+from ..utils import (
+   unified_strdate,
+   clean_html,
+)


-class ArchiveOrgIE(InfoExtractor):
+class ArchiveOrgIE(JWPlatformBaseIE):
    IE_NAME = 'archive.org'
    IE_DESC = 'archive.org videos'
-   _VALID_URL = r'https?://(?:www\.)?archive\.org/details/(?P<id>[^?/]+)(?:[?].*)?$'
+   _VALID_URL = r'https?://(?:www\.)?archive\.org/(?:details|embed)/(?P<id>[^/?#]+)(?:[?].*)?$'
    _TESTS = [{
        'url': 'http://archive.org/details/XD300-23_68HighlightsAResearchCntAugHumanIntellect',
        'md5': '8af1d4cf447933ed3c7f4871162602db',
        'info_dict': {
            'id': 'XD300-23_68HighlightsAResearchCntAugHumanIntellect',
-           'ext': 'ogv',
+           'ext': 'ogg',
            'title': '1968 Demo - FJCC Conference Presentation Reel #1',
-           'description': 'md5:1780b464abaca9991d8968c877bb53ed',
+           'description': 'md5:da45c349df039f1cc8075268eb1b5c25',
            'upload_date': '19681210',
            'uploader': 'SRI International'
        }
    }, {
        'url': 'https://archive.org/details/Cops1922',
-       'md5': '18f2a19e6d89af8425671da1cf3d4e04',
+       'md5': 'bc73c8ab3838b5a8fc6c6651fa7b58ba',
        'info_dict': {
            'id': 'Cops1922',
-           'ext': 'ogv',
+           'ext': 'mp4',
            'title': 'Buster Keaton\'s "Cops" (1922)',
-           'description': 'md5:70f72ee70882f713d4578725461ffcc3',
+           'description': 'md5:b4544662605877edd99df22f9620d858',
        }
+   }, {
+       'url': 'http://archive.org/embed/XD300-23_68HighlightsAResearchCntAugHumanIntellect',
+       'only_matching': True,
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)
-       json_url = url + ('&' if '?' in url else '?') + 'output=json'
-       data = self._download_json(json_url, video_id)
-
-       def get_optional(data_dict, field):
-           return data_dict['metadata'].get(field, [None])[0]
-
-       title = get_optional(data, 'title')
-       description = get_optional(data, 'description')
-       uploader = get_optional(data, 'creator')
-       upload_date = unified_strdate(get_optional(data, 'date'))
-
-       formats = [
-           {
-               'format': fdata['format'],
-               'url': 'http://' + data['server'] + data['dir'] + fn,
-               'file_size': int(fdata['size']),
-           }
-           for fn, fdata in data['files'].items()
-           if 'Video' in fdata['format']]
-
-       self._sort_formats(formats)
-
-       return {
-           '_type': 'video',
-           'id': video_id,
-           'title': title,
-           'formats': formats,
-           'description': description,
-           'uploader': uploader,
-           'upload_date': upload_date,
-           'thumbnail': data.get('misc', {}).get('image'),
-       }
+       webpage = self._download_webpage(
+           'http://archive.org/embed/' + video_id, video_id)
+       jwplayer_playlist = self._parse_json(self._search_regex(
+           r"(?s)Play\('[^']+'\s*,\s*(\[.+\])\s*,\s*{.*?}\);",
+           webpage, 'jwplayer playlist'), video_id)
+       info = self._parse_jwplayer_data(
+           {'playlist': jwplayer_playlist}, video_id, base_url=url)
+
+       def get_optional(metadata, field):
+           return metadata.get(field, [None])[0]
+
+       metadata = self._download_json(
+           'http://archive.org/details/' + video_id, video_id, query={
+               'output': 'json',
+           })['metadata']
+       info.update({
+           'title': get_optional(metadata, 'title') or info.get('title'),
+           'description': clean_html(get_optional(metadata, 'description')),
+       })
+       if info.get('_type') != 'playlist':
+           info.update({
+               'uploader': get_optional(metadata, 'creator'),
+               'upload_date': unified_strdate(get_optional(metadata, 'date')),
+           })
+       return info

View File: youtube_dl/extractor/ard.py

@@ -73,6 +73,7 @@ class ARDMediathekIE(InfoExtractor):
            'description': 'md5:c0c1c8048514deaed2a73b3a60eecacb',
            'duration': 3287,
        },
+       'skip': 'Video is no longer available',
    }]

    def _extract_media_info(self, media_info_url, webpage, video_id):

View File: youtube_dl/extractor/bbc.py

@@ -2,19 +2,23 @@
from __future__ import unicode_literals

import re
+import itertools

from .common import InfoExtractor
from ..utils import (
+   dict_get,
    ExtractorError,
    float_or_none,
    int_or_none,
    parse_duration,
    parse_iso8601,
+   try_get,
    unescapeHTML,
)
from ..compat import (
    compat_etree_fromstring,
    compat_HTTPError,
+   compat_urlparse,
)
@@ -229,51 +233,6 @@ class BBCCoUkIE(InfoExtractor):
        asx = self._download_xml(connection.get('href'), programme_id, 'Downloading ASX playlist')
        return [ref.get('href') for ref in asx.findall('./Entry/ref')]

-   def _extract_connection(self, connection, programme_id):
-       formats = []
-       kind = connection.get('kind')
-       protocol = connection.get('protocol')
-       supplier = connection.get('supplier')
-       if protocol == 'http':
-           href = connection.get('href')
-           transfer_format = connection.get('transferFormat')
-           # ASX playlist
-           if supplier == 'asx':
-               for i, ref in enumerate(self._extract_asx_playlist(connection, programme_id)):
-                   formats.append({
-                       'url': ref,
-                       'format_id': 'ref%s_%s' % (i, supplier),
-                   })
-           # Skip DASH until supported
-           elif transfer_format == 'dash':
-               pass
-           elif transfer_format == 'hls':
-               formats.extend(self._extract_m3u8_formats(
-                   href, programme_id, ext='mp4', entry_protocol='m3u8_native',
-                   m3u8_id=supplier, fatal=False))
-           # Direct link
-           else:
-               formats.append({
-                   'url': href,
-                   'format_id': supplier or kind or protocol,
-               })
-       elif protocol == 'rtmp':
-           application = connection.get('application', 'ondemand')
-           auth_string = connection.get('authString')
-           identifier = connection.get('identifier')
-           server = connection.get('server')
-           formats.append({
-               'url': '%s://%s/%s?%s' % (protocol, server, application, auth_string),
-               'play_path': identifier,
-               'app': '%s?%s' % (application, auth_string),
-               'page_url': 'http://www.bbc.co.uk',
-               'player_url': 'http://www.bbc.co.uk/emp/releases/iplayer/revisions/617463_618125_4/617463_618125_4_emp.swf',
-               'rtmp_live': False,
-               'ext': 'flv',
-               'format_id': supplier,
-           })
-       return formats

    def _extract_items(self, playlist):
        return playlist.findall('./{%s}item' % self._EMP_PLAYLIST_NS)
@@ -294,46 +253,6 @@ class BBCCoUkIE(InfoExtractor):
    def _extract_connections(self, media):
        return self._findall_ns(media, './{%s}connection')

-   def _extract_video(self, media, programme_id):
-       formats = []
-       vbr = int_or_none(media.get('bitrate'))
-       vcodec = media.get('encoding')
-       service = media.get('service')
-       width = int_or_none(media.get('width'))
-       height = int_or_none(media.get('height'))
-       file_size = int_or_none(media.get('media_file_size'))
-       for connection in self._extract_connections(media):
-           conn_formats = self._extract_connection(connection, programme_id)
-           for format in conn_formats:
-               format.update({
-                   'width': width,
-                   'height': height,
-                   'vbr': vbr,
-                   'vcodec': vcodec,
-                   'filesize': file_size,
-               })
-               if service:
-                   format['format_id'] = '%s_%s' % (service, format['format_id'])
-           formats.extend(conn_formats)
-       return formats

-   def _extract_audio(self, media, programme_id):
-       formats = []
-       abr = int_or_none(media.get('bitrate'))
-       acodec = media.get('encoding')
-       service = media.get('service')
-       for connection in self._extract_connections(media):
-           conn_formats = self._extract_connection(connection, programme_id)
-           for format in conn_formats:
-               format.update({
-                   'format_id': '%s_%s' % (service, format['format_id']),
-                   'abr': abr,
-                   'acodec': acodec,
-                   'vcodec': 'none',
-               })
-           formats.extend(conn_formats)
-       return formats

    def _get_subtitles(self, media, programme_id):
        subtitles = {}
        for connection in self._extract_connections(media):
@@ -379,13 +298,87 @@ class BBCCoUkIE(InfoExtractor):
    def _process_media_selector(self, media_selection, programme_id):
        formats = []
        subtitles = None
+       urls = []

        for media in self._extract_medias(media_selection):
            kind = media.get('kind')
-           if kind == 'audio':
-               formats.extend(self._extract_audio(media, programme_id))
-           elif kind == 'video':
-               formats.extend(self._extract_video(media, programme_id))
+           if kind in ('video', 'audio'):
+               bitrate = int_or_none(media.get('bitrate'))
+               encoding = media.get('encoding')
+               service = media.get('service')
+               width = int_or_none(media.get('width'))
+               height = int_or_none(media.get('height'))
+               file_size = int_or_none(media.get('media_file_size'))
+               for connection in self._extract_connections(media):
+                   href = connection.get('href')
+                   if href in urls:
+                       continue
+                   if href:
+                       urls.append(href)
+                   conn_kind = connection.get('kind')
+                   protocol = connection.get('protocol')
+                   supplier = connection.get('supplier')
+                   transfer_format = connection.get('transferFormat')
+                   format_id = supplier or conn_kind or protocol
+                   if service:
+                       format_id = '%s_%s' % (service, format_id)
+                   # ASX playlist
+                   if supplier == 'asx':
+                       for i, ref in enumerate(self._extract_asx_playlist(connection, programme_id)):
+                           formats.append({
+                               'url': ref,
+                               'format_id': 'ref%s_%s' % (i, format_id),
+                           })
+                   elif transfer_format == 'dash':
+                       formats.extend(self._extract_mpd_formats(
+                           href, programme_id, mpd_id=format_id, fatal=False))
+                   elif transfer_format == 'hls':
+                       formats.extend(self._extract_m3u8_formats(
+                           href, programme_id, ext='mp4', entry_protocol='m3u8_native',
+                           m3u8_id=format_id, fatal=False))
+                   elif transfer_format == 'hds':
+                       formats.extend(self._extract_f4m_formats(
+                           href, programme_id, f4m_id=format_id, fatal=False))
+                   else:
+                       if not service and not supplier and bitrate:
+                           format_id += '-%d' % bitrate
+                       fmt = {
+                           'format_id': format_id,
+                           'filesize': file_size,
+                       }
+                       if kind == 'video':
+                           fmt.update({
+                               'width': width,
+                               'height': height,
+                               'vbr': bitrate,
+                               'vcodec': encoding,
+                           })
+                       else:
+                           fmt.update({
+                               'abr': bitrate,
+                               'acodec': encoding,
+                               'vcodec': 'none',
+                           })
+                       if protocol == 'http':
+                           # Direct link
+                           fmt.update({
+                               'url': href,
+                           })
+                       elif protocol == 'rtmp':
+                           application = connection.get('application', 'ondemand')
+                           auth_string = connection.get('authString')
+                           identifier = connection.get('identifier')
+                           server = connection.get('server')
+                           fmt.update({
+                               'url': '%s://%s/%s?%s' % (protocol, server, application, auth_string),
+                               'play_path': identifier,
+                               'app': '%s?%s' % (application, auth_string),
+                               'page_url': 'http://www.bbc.co.uk',
+                               'player_url': 'http://www.bbc.co.uk/emp/releases/iplayer/revisions/617463_618125_4/617463_618125_4_emp.swf',
+                               'rtmp_live': False,
+                               'ext': 'flv',
+                           })
+                       formats.append(fmt)
            elif kind == 'captions':
                subtitles = self.extract_subtitles(media, programme_id)

        return formats, subtitles
@@ -589,7 +582,7 @@ class BBCIE(BBCCoUkIE):
        'info_dict': {
            'id': '150615_telabyad_kentin_cogu',
            'ext': 'mp4',
-           'title': "Tel Abyad'da IŞİD bayrağı indirildi YPG bayrağı çekildi",
+           'title': "YPG: Tel Abyad'ın tamamı kontrolümüzde",
            'description': 'md5:33a4805a855c9baf7115fcbde57e7025',
            'timestamp': 1434397334,
            'upload_date': '20150615',
@@ -654,6 +647,23 @@ class BBCIE(BBCCoUkIE):
        'params': {
            # rtmp download
            'skip_download': True,
        }
+   }, {
+       # single video embedded with Morph
+       'url': 'http://www.bbc.co.uk/sport/live/olympics/36895975',
+       'info_dict': {
+           'id': 'p041vhd0',
+           'ext': 'mp4',
+           'title': "Nigeria v Japan - Men's First Round",
+           'description': 'Live coverage of the first round from Group B at the Amazonia Arena.',
+           'duration': 7980,
+           'uploader': 'BBC Sport',
+           'uploader_id': 'bbc_sport',
+       },
+       'params': {
+           # m3u8 download
+           'skip_download': True,
+       },
+       'skip': 'Georestricted to UK',
    }, {
        # single video with playlist.sxml URL in playlist param
        'url': 'http://www.bbc.com/sport/0/football/33653409',
@@ -751,7 +761,7 @@ class BBCIE(BBCCoUkIE):
        webpage = self._download_webpage(url, playlist_id)

-       json_ld_info = self._search_json_ld(webpage, playlist_id, default=None)
+       json_ld_info = self._search_json_ld(webpage, playlist_id, default={})
        timestamp = json_ld_info.get('timestamp')
        playlist_title = json_ld_info.get('title')
@@ -820,13 +830,19 @@ class BBCIE(BBCCoUkIE):
                # http://www.bbc.com/turkce/multimedya/2015/10/151010_vid_ankara_patlama_ani)
                playlist = data_playable.get('otherSettings', {}).get('playlist', {})
                if playlist:
-                   for key in ('progressiveDownload', 'streaming'):
+                   entry = None
+                   for key in ('streaming', 'progressiveDownload'):
                        playlist_url = playlist.get('%sUrl' % key)
                        if not playlist_url:
                            continue
                        try:
-                           entries.append(self._extract_from_playlist_sxml(
-                               playlist_url, playlist_id, timestamp))
+                           info = self._extract_from_playlist_sxml(
+                               playlist_url, playlist_id, timestamp)
+                           if not entry:
+                               entry = info
+                           else:
+                               entry['title'] = info['title']
+                               entry['formats'].extend(info['formats'])
                        except Exception as e:
                            # Some playlist URL may fail with 500, at the same time
                            # the other one may work fine (e.g.
@@ -834,6 +850,9 @@ class BBCIE(BBCCoUkIE):
                            if isinstance(e.cause, compat_HTTPError) and e.cause.code == 500:
                                continue
                            raise
+                   if entry:
+                       self._sort_formats(entry['formats'])
+                       entries.append(entry)

        if entries:
            return self.playlist_result(entries, playlist_id, playlist_title, playlist_description)
@@ -866,6 +885,50 @@ class BBCIE(BBCCoUkIE):
                'subtitles': subtitles,
            }

+       # Morph based embed (e.g. http://www.bbc.co.uk/sport/live/olympics/36895975)
+       # Several setPayload calls may be present but the video seems to be
+       # always related to the first one
+       morph_payload = self._parse_json(
+           self._search_regex(
+               r'Morph\.setPayload\([^,]+,\s*({.+?})\);',
+               webpage, 'morph payload', default='{}'),
+           playlist_id, fatal=False)
+       if morph_payload:
+           components = try_get(morph_payload, lambda x: x['body']['components'], list) or []
+           for component in components:
+               if not isinstance(component, dict):
+                   continue
+               lead_media = try_get(component, lambda x: x['props']['leadMedia'], dict)
+               if not lead_media:
+                   continue
+               identifiers = lead_media.get('identifiers')
+               if not identifiers or not isinstance(identifiers, dict):
+                   continue
+               programme_id = identifiers.get('vpid') or identifiers.get('playablePid')
+               if not programme_id:
+                   continue
+               title = lead_media.get('title') or self._og_search_title(webpage)
+               formats, subtitles = self._download_media_selector(programme_id)
+               self._sort_formats(formats)
+               description = lead_media.get('summary')
+               uploader = lead_media.get('masterBrand')
+               uploader_id = lead_media.get('mid')
+               duration = None
+               duration_d = lead_media.get('duration')
+               if isinstance(duration_d, dict):
+                   duration = parse_duration(dict_get(
+                       duration_d, ('rawDuration', 'formattedDuration', 'spokenDuration')))
+               return {
+                   'id': programme_id,
+                   'title': title,
+                   'description': description,
+                   'duration': duration,
+                   'uploader': uploader,
+                   'uploader_id': uploader_id,
+                   'formats': formats,
+                   'subtitles': subtitles,
+               }

        def extract_all(pattern):
            return list(filter(None, map(
                lambda s: self._parse_json(s, playlist_id, fatal=False),
@@ -883,7 +946,7 @@ class BBCIE(BBCCoUkIE):
            r'setPlaylist\("(%s)"\)' % EMBED_URL, webpage))
        if entries:
            return self.playlist_result(
-               [self.url_result(entry, 'BBCCoUk') for entry in entries],
+               [self.url_result(entry_, 'BBCCoUk') for entry_ in entries],
                playlist_id, playlist_title, playlist_description)

        # Multiple video article (e.g. http://www.bbc.com/news/world-europe-32668511)
@@ -995,19 +1058,35 @@ class BBCCoUkArticleIE(InfoExtractor):
class BBCCoUkPlaylistBaseIE(InfoExtractor):
+   def _entries(self, webpage, url, playlist_id):
+       single_page = 'page' in compat_urlparse.parse_qs(
+           compat_urlparse.urlparse(url).query)
+       for page_num in itertools.count(2):
+           for video_id in re.findall(
+                   self._VIDEO_ID_TEMPLATE % BBCCoUkIE._ID_REGEX, webpage):
+               yield self.url_result(
+                   self._URL_TEMPLATE % video_id, BBCCoUkIE.ie_key())
+           if single_page:
+               return
+           next_page = self._search_regex(
+               r'<li[^>]+class=(["\'])pagination_+next\1[^>]*><a[^>]+href=(["\'])(?P<url>(?:(?!\2).)+)\2',
+               webpage, 'next page url', default=None, group='url')
+           if not next_page:
+               break
+           webpage = self._download_webpage(
+               compat_urlparse.urljoin(url, next_page), playlist_id,
+               'Downloading page %d' % page_num, page_num)

    def _real_extract(self, url):
        playlist_id = self._match_id(url)

        webpage = self._download_webpage(url, playlist_id)

-       entries = [
-           self.url_result(self._URL_TEMPLATE % video_id, BBCCoUkIE.ie_key())
-           for video_id in re.findall(
-               self._VIDEO_ID_TEMPLATE % BBCCoUkIE._ID_REGEX, webpage)]

        title, description = self._extract_title_and_description(webpage)

-       return self.playlist_result(entries, playlist_id, title, description)
+       return self.playlist_result(
+           self._entries(webpage, url, playlist_id),
+           playlist_id, title, description)


class BBCCoUkIPlayerPlaylistIE(BBCCoUkPlaylistBaseIE):
@@ -1056,6 +1135,24 @@ class BBCCoUkPlaylistIE(BBCCoUkPlaylistBaseIE):
        'description': 'French thriller serial about a missing teenager.',
        },
        'playlist_mincount': 7,
+   }, {
+       # multipage playlist, explicit page
+       'url': 'http://www.bbc.co.uk/programmes/b00mfl7n/clips?page=1',
+       'info_dict': {
+           'id': 'b00mfl7n',
+           'title': 'Frozen Planet - Clips - BBC One',
+           'description': 'md5:65dcbf591ae628dafe32aa6c4a4a0d8c',
+       },
+       'playlist_mincount': 24,
+   }, {
+       # multipage playlist, all pages
+       'url': 'http://www.bbc.co.uk/programmes/b00mfl7n/clips',
+       'info_dict': {
+           'id': 'b00mfl7n',
+           'title': 'Frozen Planet - Clips - BBC One',
+           'description': 'md5:65dcbf591ae628dafe32aa6c4a4a0d8c',
+       },
+       'playlist_mincount': 142,
    }, {
        'url': 'http://www.bbc.co.uk/programmes/b05rcz9v/broadcasts/2016/06',
        'only_matching': True,
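The _entries generator above follows the standard scrape-then-follow-the-next-link pagination loop, stopping early when an explicit ?page= was requested. The bare pattern, stripped of BBC specifics (the regexes and helper names here are illustrative, not the real ones):

import itertools
import re

def paged_entries(download_page, first_page_html, base_url):
    # download_page(url) -> html; yields video ids across all 'next' pages
    webpage = first_page_html
    for page_num in itertools.count(2):
        for video_id in re.findall(r'data-pid="([a-z0-9]{8})"', webpage):  # hypothetical pattern
            yield video_id
        m = re.search(r'class="pagination_next"[^>]*><a[^>]+href="([^"]+)"', webpage)
        if not m:
            break
        webpage = download_page(base_url + m.group(1))  # fetch page page_num and keep going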

View File: youtube_dl/extractor/bigflix.py

@@ -11,22 +11,13 @@ from ..compat import compat_urllib_parse_unquote
class BigflixIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?bigflix\.com/.+/(?P<id>[0-9]+)'
    _TESTS = [{
-       'url': 'http://www.bigflix.com/Hindi-movies/Action-movies/Singham-Returns/16537',
-       'md5': 'ec76aa9b1129e2e5b301a474e54fab74',
-       'info_dict': {
-           'id': '16537',
-           'ext': 'mp4',
-           'title': 'Singham Returns',
-           'description': 'md5:3d2ba5815f14911d5cc6a501ae0cf65d',
-       }
-   }, {
        # 2 formats
        'url': 'http://www.bigflix.com/Tamil-movies/Drama-movies/Madarasapatinam/16070',
        'info_dict': {
            'id': '16070',
            'ext': 'mp4',
            'title': 'Madarasapatinam',
-           'description': 'md5:63b9b8ed79189c6f0418c26d9a3452ca',
+           'description': 'md5:9f0470b26a4ba8e824c823b5d95c2f6b',
            'formats': 'mincount:2',
        },
        'params': {

View File: youtube_dl/extractor/bilibili.py

@@ -25,13 +25,13 @@ class BiliBiliIE(InfoExtractor):
    _TESTS = [{
        'url': 'http://www.bilibili.tv/video/av1074402/',
-       'md5': '5f7d29e1a2872f3df0cf76b1f87d3788',
+       'md5': '9fa226fe2b8a9a4d5a69b4c6a183417e',
        'info_dict': {
            'id': '1554319',
-           'ext': 'flv',
+           'ext': 'mp4',
            'title': '【金坷垃】金泡沫',
            'description': 'md5:ce18c2a2d2193f0df2917d270f2e5923',
-           'duration': 308.067,
+           'duration': 308.315,
            'timestamp': 1398012660,
            'upload_date': '20140420',
            'thumbnail': 're:^https?://.+\.jpg',
@@ -41,73 +41,33 @@ class BiliBiliIE(InfoExtractor):
    }, {
        'url': 'http://www.bilibili.com/video/av1041170/',
        'info_dict': {
-           'id': '1041170',
+           'id': '1507019',
+           'ext': 'mp4',
            'title': '【BD1080P】刀语【诸神&异域】',
            'description': '这是个神奇的故事~每个人不留弹幕不给走哦~切利哦!~',
+           'timestamp': 1396530060,
+           'upload_date': '20140403',
+           'uploader': '枫叶逝去',
+           'uploader_id': '520116',
        },
-       'playlist_count': 9,
    }, {
        'url': 'http://www.bilibili.com/video/av4808130/',
        'info_dict': {
-           'id': '4808130',
+           'id': '7802182',
+           'ext': 'mp4',
            'title': '【长篇】哆啦A梦443【钉铛】',
            'description': '(2016.05.27)来组合客人的脸吧&amp;amp;寻母六千里锭 抱歉,又轮到周日上班现在才到家 封面www.pixiv.net/member_illust.php?mode=medium&amp;amp;illust_id=56912929',
+           'timestamp': 1464564180,
+           'upload_date': '20160529',
+           'uploader': '喜欢拉面',
+           'uploader_id': '151066',
        },
-       'playlist': [{
-           'md5': '55cdadedf3254caaa0d5d27cf20a8f9c',
-           'info_dict': {
-               'id': '4808130_part1',
-               'ext': 'flv',
-               'title': '【长篇】哆啦A梦443【钉铛】',
-               'description': '(2016.05.27)来组合客人的脸吧&amp;amp;寻母六千里锭 抱歉,又轮到周日上班现在才到家 封面www.pixiv.net/member_illust.php?mode=medium&amp;amp;illust_id=56912929',
-               'timestamp': 1464564180,
-               'upload_date': '20160529',
-               'uploader': '喜欢拉面',
-               'uploader_id': '151066',
-           },
-       }, {
-           'md5': '926f9f67d0c482091872fbd8eca7ea3d',
-           'info_dict': {
-               'id': '4808130_part2',
-               'ext': 'flv',
-               'title': '【长篇】哆啦A梦443【钉铛】',
-               'description': '(2016.05.27)来组合客人的脸吧&amp;amp;寻母六千里锭 抱歉,又轮到周日上班现在才到家 封面www.pixiv.net/member_illust.php?mode=medium&amp;amp;illust_id=56912929',
-               'timestamp': 1464564180,
-               'upload_date': '20160529',
-               'uploader': '喜欢拉面',
-               'uploader_id': '151066',
-           },
-       }, {
-           'md5': '4b7b225b968402d7c32348c646f1fd83',
-           'info_dict': {
-               'id': '4808130_part3',
-               'ext': 'flv',
-               'title': '【长篇】哆啦A梦443【钉铛】',
-               'description': '(2016.05.27)来组合客人的脸吧&amp;amp;寻母六千里锭 抱歉,又轮到周日上班现在才到家 封面www.pixiv.net/member_illust.php?mode=medium&amp;amp;illust_id=56912929',
-               'timestamp': 1464564180,
-               'upload_date': '20160529',
-               'uploader': '喜欢拉面',
-               'uploader_id': '151066',
-           },
-       }, {
-           'md5': '7b795e214166501e9141139eea236e91',
-           'info_dict': {
-               'id': '4808130_part4',
-               'ext': 'flv',
-               'title': '【长篇】哆啦A梦443【钉铛】',
-               'description': '(2016.05.27)来组合客人的脸吧&amp;amp;寻母六千里锭 抱歉,又轮到周日上班现在才到家 封面www.pixiv.net/member_illust.php?mode=medium&amp;amp;illust_id=56912929',
-               'timestamp': 1464564180,
-               'upload_date': '20160529',
-               'uploader': '喜欢拉面',
-               'uploader_id': '151066',
-           },
-       }],
    }, {
        # Missing upload time
        'url': 'http://www.bilibili.com/video/av1867637/',
        'info_dict': {
            'id': '2880301',
-           'ext': 'flv',
+           'ext': 'mp4',
            'title': '【HDTV】【喜剧】岳父岳母真难当 2014【法国票房冠军】',
            'description': '一个信奉天主教的法国旧式传统资产阶级家庭中有四个女儿。三个女儿却分别找了阿拉伯、犹太、中国丈夫,老夫老妻唯独期盼剩下未嫁的小女儿能找一个信奉天主教的法国白人,结果没想到小女儿找了一位非裔黑人……【这次应该不会跳帧了】',
            'uploader': '黑夜为猫',

View File: youtube_dl/extractor/biqle.py

@@ -24,7 +24,8 @@ class BIQLEIE(InfoExtractor):
            'ext': 'mp4',
            'title': 'Ребенок в шоке от автоматической мойки',
            'uploader': 'Dmitry Kotov',
-       }
+       },
+       'skip': 'This video was marked as adult. Embedding adult videos on external sites is prohibited.',
    }]

    def _real_extract(self, url):

View File: youtube_dl/extractor/bloomberg.py

@@ -1,3 +1,4 @@
+# coding: utf-8
from __future__ import unicode_literals

import re
@@ -20,6 +21,18 @@ class BloombergIE(InfoExtractor):
        'params': {
            'format': 'best[format_id^=hds]',
        },
+   }, {
+       # video ID in BPlayer(...)
+       'url': 'http://www.bloomberg.com/features/2016-hello-world-new-zealand/',
+       'info_dict': {
+           'id': '938c7e72-3f25-4ddb-8b85-a9be731baa74',
+           'ext': 'flv',
+           'title': 'Meet the Real-Life Tech Wizards of Middle Earth',
+           'description': 'Hello World, Episode 1: New Zealand’s freaky AI babies, robot exoskeletons, and a virtual you.',
+       },
+       'params': {
+           'format': 'best[format_id^=hds]',
+       },
    }, {
        'url': 'http://www.bloomberg.com/news/articles/2015-11-12/five-strange-things-that-have-been-happening-in-financial-markets',
        'only_matching': True,
@@ -33,7 +46,11 @@ class BloombergIE(InfoExtractor):
        webpage = self._download_webpage(url, name)
        video_id = self._search_regex(
            r'["\']bmmrId["\']\s*:\s*(["\'])(?P<url>.+?)\1',
-           webpage, 'id', group='url')
+           webpage, 'id', group='url', default=None)
+       if not video_id:
+           bplayer_data = self._parse_json(self._search_regex(
+               r'BPlayer\(null,\s*({[^;]+})\);', webpage, 'id'), name)
+           video_id = bplayer_data['id']
        title = re.sub(': Video$', '', self._og_search_title(webpage))

        embed_info = self._download_json(
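The BPlayer fallback above scrapes the player bootstrap call out of the page; its regex can be exercised standalone (the HTML snippet is fabricated):

import json
import re

html = 'BPlayer(null, {"id": "938c7e72-3f25-4ddb-8b85-a9be731baa74"});'
m = re.search(r'BPlayer\(null,\s*({[^;]+})\);', html)
print(json.loads(m.group(1))['id'])
# 938c7e72-3f25-4ddb-8b85-a9be731baa74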

View File: youtube_dl/extractor/camdemy.py

@@ -1,7 +1,6 @@
# coding: utf-8
from __future__ import unicode_literals

-import datetime
import re

from .common import InfoExtractor
@@ -10,8 +9,10 @@ from ..compat import (
    compat_urlparse,
)
from ..utils import (
-   parse_iso8601,
+   clean_html,
+   parse_duration,
    str_to_int,
+   unified_strdate,
)
@@ -26,14 +27,14 @@ class CamdemyIE(InfoExtractor):
            'ext': 'mp4',
            'title': 'Ch1-1 Introduction, Signals (02-23-2012)',
            'thumbnail': 're:^https?://.*\.jpg$',
-           'description': '',
            'creator': 'ss11spring',
+           'duration': 1591,
            'upload_date': '20130114',
-           'timestamp': 1358154556,
            'view_count': int,
        }
    }, {
        # With non-empty description
+       # webpage returns "No permission or not login"
        'url': 'http://www.camdemy.com/media/13885',
        'md5': '4576a3bb2581f86c61044822adbd1249',
        'info_dict': {
@@ -41,64 +42,71 @@ class CamdemyIE(InfoExtractor):
'ext': 'mp4', 'ext': 'mp4',
'title': 'EverCam + Camdemy QuickStart', 'title': 'EverCam + Camdemy QuickStart',
'thumbnail': 're:^https?://.*\.jpg$', 'thumbnail': 're:^https?://.*\.jpg$',
'description': 'md5:050b62f71ed62928f8a35f1a41e186c9', 'description': 'md5:2a9f989c2b153a2342acee579c6e7db6',
'creator': 'evercam', 'creator': 'evercam',
'upload_date': '20140620', 'duration': 318,
'timestamp': 1403271569,
} }
}, { }, {
# External source # External source (YouTube)
'url': 'http://www.camdemy.com/media/14842', 'url': 'http://www.camdemy.com/media/14842',
'md5': '50e1c3c3aa233d3d7b7daa2fa10b1cf7',
'info_dict': { 'info_dict': {
'id': '2vsYQzNIsJo', 'id': '2vsYQzNIsJo',
'ext': 'mp4', 'ext': 'mp4',
'title': 'Excel 2013 Tutorial - How to add Password Protection',
'description': 'Excel 2013 Tutorial for Beginners - How to add Password Protection',
'upload_date': '20130211', 'upload_date': '20130211',
'uploader': 'Hun Kim', 'uploader': 'Hun Kim',
'description': 'Excel 2013 Tutorial for Beginners - How to add Password Protection',
'uploader_id': 'hunkimtutorials', 'uploader_id': 'hunkimtutorials',
'title': 'Excel 2013 Tutorial - How to add Password Protection', },
} 'params': {
'skip_download': True,
},
}] }]
def _real_extract(self, url): def _real_extract(self, url):
video_id = self._match_id(url) video_id = self._match_id(url)
page = self._download_webpage(url, video_id)
webpage = self._download_webpage(url, video_id)
src_from = self._html_search_regex( src_from = self._html_search_regex(
r"<div class='srcFrom'>Source: <a title='([^']+)'", page, r"class=['\"]srcFrom['\"][^>]*>Sources?(?:\s+from)?\s*:\s*<a[^>]+(?:href|title)=(['\"])(?P<url>(?:(?!\1).)+)\1",
'external source', default=None) webpage, 'external source', default=None, group='url')
if src_from: if src_from:
return self.url_result(src_from) return self.url_result(src_from)
oembed_obj = self._download_json( oembed_obj = self._download_json(
'http://www.camdemy.com/oembed/?format=json&url=' + url, video_id) 'http://www.camdemy.com/oembed/?format=json&url=' + url, video_id)
title = oembed_obj['title']
thumb_url = oembed_obj['thumbnail_url'] thumb_url = oembed_obj['thumbnail_url']
video_folder = compat_urlparse.urljoin(thumb_url, 'video/') video_folder = compat_urlparse.urljoin(thumb_url, 'video/')
file_list_doc = self._download_xml( file_list_doc = self._download_xml(
compat_urlparse.urljoin(video_folder, 'fileList.xml'), compat_urlparse.urljoin(video_folder, 'fileList.xml'),
video_id, 'Filelist XML') video_id, 'Downloading filelist XML')
file_name = file_list_doc.find('./video/item/fileName').text file_name = file_list_doc.find('./video/item/fileName').text
video_url = compat_urlparse.urljoin(video_folder, file_name) video_url = compat_urlparse.urljoin(video_folder, file_name)
timestamp = parse_iso8601(self._html_search_regex( # Some URLs return "No permission or not login" in a webpage despite being
r"<div class='title'>Posted\s*:</div>\s*<div class='value'>([^<>]+)<", # freely available via oembed JSON URL (e.g. http://www.camdemy.com/media/13885)
page, 'creation time', fatal=False), upload_date = unified_strdate(self._search_regex(
delimiter=' ', timezone=datetime.timedelta(hours=8)) r'>published on ([^<]+)<', webpage,
view_count = str_to_int(self._html_search_regex( 'upload date', default=None))
r"<div class='title'>Views\s*:</div>\s*<div class='value'>([^<>]+)<", view_count = str_to_int(self._search_regex(
page, 'view count', fatal=False)) r'role=["\']viewCnt["\'][^>]*>([\d,.]+) views',
webpage, 'view count', default=None))
description = self._html_search_meta(
'description', webpage, default=None) or clean_html(
oembed_obj.get('description'))
return { return {
'id': video_id, 'id': video_id,
'url': video_url, 'url': video_url,
'title': oembed_obj['title'], 'title': title,
'thumbnail': thumb_url, 'thumbnail': thumb_url,
'description': self._html_search_meta('description', page), 'description': description,
'creator': oembed_obj['author_name'], 'creator': oembed_obj.get('author_name'),
'duration': oembed_obj['duration'], 'duration': parse_duration(oembed_obj.get('duration')),
'timestamp': timestamp, 'upload_date': upload_date,
'view_count': view_count, 'view_count': view_count,
} }
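
Aside: the rewrite leans on youtube-dl's tolerant parsers instead of hand-rolled date and number handling. A quick demonstration, assuming a youtube_dl checkout is importable (values are examples):

from youtube_dl.utils import parse_duration, str_to_int, unified_strdate

print(parse_duration('318'))             # 318.0 (bare seconds, as oembed returns)
print(unified_strdate('June 20, 2014'))  # '20140620'
print(str_to_int('1,591'))               # 1591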

View File

@@ -4,9 +4,11 @@ from __future__ import unicode_literals
 import re

 from .common import InfoExtractor
+from ..compat import compat_str
 from ..utils import (
     js_to_json,
     smuggle_url,
+    try_get,
 )
@@ -27,7 +29,20 @@ class CBCIE(InfoExtractor):
         },
         'skip': 'Geo-restricted to Canada',
     }, {
-        # with clipId
+        # with clipId, feed available via tpfeed.cbc.ca and feed.theplatform.com
+        'url': 'http://www.cbc.ca/22minutes/videos/22-minutes-update/22-minutes-update-episode-4',
+        'md5': '162adfa070274b144f4fdc3c3b8207db',
+        'info_dict': {
+            'id': '2414435309',
+            'ext': 'mp4',
+            'title': '22 Minutes Update: What Not To Wear Quebec',
+            'description': "This week's latest Canadian top political story is What Not To Wear Quebec.",
+            'upload_date': '20131025',
+            'uploader': 'CBCC-NEW',
+            'timestamp': 1382717907,
+        },
+    }, {
+        # with clipId, feed only available via tpfeed.cbc.ca
         'url': 'http://www.cbc.ca/archives/entry/1978-robin-williams-freestyles-on-90-minutes-live',
         'md5': '0274a90b51a9b4971fe005c63f592f12',
         'info_dict': {
@@ -83,9 +98,15 @@ class CBCIE(InfoExtractor):
             media_id = player_info.get('mediaId')
             if not media_id:
                 clip_id = player_info['clipId']
-                media_id = self._download_json(
-                    'http://feed.theplatform.com/f/h9dtGB/punlNGjMlc1F?fields=id&byContent=byReleases%3DbyId%253D' + clip_id,
-                    clip_id)['entries'][0]['id'].split('/')[-1]
+                feed = self._download_json(
+                    'http://tpfeed.cbc.ca/f/ExhSPC/vms_5akSXx4Ng_Zn?byCustomValue={:mpsReleases}{%s}' % clip_id,
+                    clip_id, fatal=False)
+                if feed:
+                    media_id = try_get(feed, lambda x: x['entries'][0]['guid'], compat_str)
+                if not media_id:
+                    media_id = self._download_json(
+                        'http://feed.theplatform.com/f/h9dtGB/punlNGjMlc1F?fields=id&byContent=byReleases%3DbyId%253D' + clip_id,
+                        clip_id)['entries'][0]['id'].split('/')[-1]
             return self.url_result('cbcplayer:%s' % media_id, 'CBCPlayer', media_id)
         else:
             entries = [self.url_result('cbcplayer:%s' % media_id, 'CBCPlayer', media_id) for media_id in re.findall(r'<iframe[^>]+src="[^"]+?mediaId=(\d+)"', webpage)]
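
Aside: try_get is what makes the new tpfeed lookup safe — the lambda may hit a missing key or index at any depth, and the helper returns None instead of raising (and also rejects values of the wrong type). A sketch, assuming youtube_dl is importable:

from youtube_dl.compat import compat_str
from youtube_dl.utils import try_get

feed = {'entries': [{'guid': '2414435309'}]}
print(try_get(feed, lambda x: x['entries'][0]['guid'], compat_str))  # '2414435309'
print(try_get({}, lambda x: x['entries'][0]['guid'], compat_str))    # None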

View File

@@ -1,12 +1,10 @@
 # coding: utf-8
 from __future__ import unicode_literals

-import calendar
-import datetime
-
 from .anvato import AnvatoIE
 from .sendtonews import SendtoNewsIE
 from ..compat import compat_urlparse
+from ..utils import unified_timestamp


 class CBSLocalIE(AnvatoIE):
@@ -43,13 +41,8 @@ class CBSLocalIE(AnvatoIE):
         'url': 'http://cleveland.cbslocal.com/2016/05/16/indians-score-season-high-15-runs-in-blowout-win-over-reds-rapid-reaction/',
         'info_dict': {
             'id': 'GxfCe0Zo7D-175909-5588',
-            'ext': 'mp4',
-            'title': 'Recap: CLE 15, CIN 6',
-            'description': '5/16/16: Indians\' bats explode for 15 runs in a win',
-            'upload_date': '20160516',
-            'timestamp': 1463433840,
-            'duration': 49,
         },
+        'playlist_count': 9,
         'params': {
             # m3u8 download
             'skip_download': True,
@@ -62,19 +55,15 @@ class CBSLocalIE(AnvatoIE):
         sendtonews_url = SendtoNewsIE._extract_url(webpage)
         if sendtonews_url:
-            info_dict = {
-                '_type': 'url_transparent',
-                'url': compat_urlparse.urljoin(url, sendtonews_url),
-            }
-        else:
-            info_dict = self._extract_anvato_videos(webpage, display_id)
+            return self.url_result(
+                compat_urlparse.urljoin(url, sendtonews_url),
+                ie=SendtoNewsIE.ie_key())
+
+        info_dict = self._extract_anvato_videos(webpage, display_id)

         time_str = self._html_search_regex(
             r'class="entry-date">([^<]+)<', webpage, 'released date', fatal=False)
-        timestamp = None
-        if time_str:
-            timestamp = calendar.timegm(datetime.datetime.strptime(
-                time_str, '%b %d, %Y %I:%M %p').timetuple())
+        timestamp = unified_timestamp(time_str)

         info_dict.update({
             'display_id': display_id,
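
Aside: unified_timestamp subsumes the old calendar.timegm/strptime dance and degrades gracefully — it copes with the 'Mon DD, YYYY HH:MM am/pm' layout the removed code hard-coded, and returns None for missing or unparseable input rather than raising. A minimal check, assuming youtube_dl is importable:

from youtube_dl.utils import unified_timestamp

print(unified_timestamp('May 16, 2016 11:24 pm'))  # epoch seconds as an int
print(unified_timestamp(None))                     # None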

View File

@@ -70,7 +70,8 @@ class CBSNewsLiveVideoIE(InfoExtractor):
     IE_DESC = 'CBS News Live Videos'
     _VALID_URL = r'https?://(?:www\.)?cbsnews\.com/live/video/(?P<id>[\da-z_-]+)'

-    _TESTS = [{
+    # Live videos get deleted soon. See http://www.cbsnews.com/live/ for the latest examples
+    _TEST = {
         'url': 'http://www.cbsnews.com/live/video/clinton-sanders-prepare-to-face-off-in-nh/',
         'info_dict': {
             'id': 'clinton-sanders-prepare-to-face-off-in-nh',
@@ -78,15 +79,8 @@ class CBSNewsLiveVideoIE(InfoExtractor):
             'title': 'Clinton, Sanders Prepare To Face Off In NH',
             'duration': 334,
         },
-        'skip': 'Video gone, redirected to http://www.cbsnews.com/live/',
-    }, {
-        'url': 'http://www.cbsnews.com/live/video/video-shows-intense-paragliding-accident/',
-        'info_dict': {
-            'id': 'video-shows-intense-paragliding-accident',
-            'ext': 'flv',
-            'title': 'Video Shows Intense Paragliding Accident',
-        },
-    }]
+        'skip': 'Video gone',
+    }

     def _real_extract(self, url):
         video_id = self._match_id(url)

View File

@@ -17,7 +17,8 @@ class ChaturbateIE(InfoExtractor):
         },
         'params': {
             'skip_download': True,
-        }
+        },
+        'skip': 'Room is offline',
     }, {
         'url': 'https://en.chaturbate.com/siswet19/',
         'only_matching': True,

View File

@@ -1,30 +1,33 @@
 # coding: utf-8
 from __future__ import unicode_literals

+import base64
+
 from .common import InfoExtractor
-from ..utils import (
-    parse_duration,
-    int_or_none,
-)
+from ..utils import parse_duration


 class ChirbitIE(InfoExtractor):
     IE_NAME = 'chirbit'
     _VALID_URL = r'https?://(?:www\.)?chirb\.it/(?:(?:wp|pl)/|fb_chirbit_player\.swf\?key=)?(?P<id>[\da-zA-Z]+)'
     _TESTS = [{
-        'url': 'http://chirb.it/PrIPv5',
+        'url': 'http://chirb.it/be2abG',
+        'md5': '9847b0dad6ac3e074568bf2cfb197de8',
         'info_dict': {
-            'id': 'PrIPv5',
+            'id': 'be2abG',
             'ext': 'mp3',
-            'title': 'Фасадстрой',
-            'duration': 52,
-            'view_count': int,
-            'comment_count': int,
+            'title': 'md5:f542ea253f5255240be4da375c6a5d7e',
+            'description': 'md5:f24a4e22a71763e32da5fed59e47c770',
+            'duration': 306,
+        },
+        'params': {
+            'skip_download': True,
         }
     }, {
         'url': 'https://chirb.it/fb_chirbit_player.swf?key=PrIPv5',
         'only_matching': True,
+    }, {
+        'url': 'https://chirb.it/wp/MN58c2',
+        'only_matching': True,
     }]
def _real_extract(self, url): def _real_extract(self, url):
@@ -33,27 +36,30 @@ class ChirbitIE(InfoExtractor):
         webpage = self._download_webpage(
             'http://chirb.it/%s' % audio_id, audio_id)

-        audio_url = self._search_regex(
-            r'"setFile"\s*,\s*"([^"]+)"', webpage, 'audio url')
+        data_fd = self._search_regex(
+            r'data-fd=(["\'])(?P<url>(?:(?!\1).)+)\1',
+            webpage, 'data fd', group='url')
+
+        # Reverse engineered from https://chirb.it/js/chirbit.player.js (look
+        # for soundURL)
+        audio_url = base64.b64decode(
+            data_fd[::-1].encode('ascii')).decode('utf-8')

         title = self._search_regex(
-            r'itemprop="name">([^<]+)', webpage, 'title')
-        duration = parse_duration(self._html_search_meta(
-            'duration', webpage, 'duration', fatal=False))
-        view_count = int_or_none(self._search_regex(
-            r'itemprop="playCount"\s*>(\d+)', webpage,
-            'listen count', fatal=False))
-        comment_count = int_or_none(self._search_regex(
-            r'>(\d+) Comments?:', webpage,
-            'comment count', fatal=False))
+            r'class=["\']chirbit-title["\'][^>]*>([^<]+)', webpage, 'title')
+        description = self._search_regex(
+            r'<h3>Description</h3>\s*<pre[^>]*>([^<]+)</pre>',
+            webpage, 'description', default=None)
+        duration = parse_duration(self._search_regex(
+            r'class=["\']c-length["\'][^>]*>([^<]+)',
+            webpage, 'duration', fatal=False))

         return {
             'id': audio_id,
             'url': audio_url,
             'title': title,
+            'description': description,
             'duration': duration,
-            'view_count': view_count,
-            'comment_count': comment_count,
         }
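
Aside: the new chirbit code undoes a tiny obfuscation — the data-fd attribute holds the audio URL, base64-encoded and then reversed. A self-contained round-trip sketch (the sample URL is invented):

import base64

def decode_data_fd(data_fd):
    # Undo the obfuscation: reverse the string, then base64-decode it.
    return base64.b64decode(data_fd[::-1].encode('ascii')).decode('utf-8')

url = 'http://audio.chirbit.com/sample.mp3'
encoded = base64.b64encode(url.encode('utf-8')).decode('ascii')[::-1]
assert decode_data_fd(encoded) == url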

View File

@@ -1,5 +1,7 @@
 from __future__ import unicode_literals

 from .mtv import MTVIE
+from ..utils import ExtractorError


 class CMTIE(MTVIE):
@@ -16,7 +18,27 @@ class CMTIE(MTVIE):
             'title': 'Garth Brooks - "The Call (featuring Trisha Yearwood)"',
             'description': 'Blame It All On My Roots',
         },
+        'skip': 'Video not available',
+    }, {
+        'url': 'http://www.cmt.com/videos/misc/1504699/still-the-king-ep-109-in-3-minutes.jhtml#id=1739908',
+        'md5': 'e61a801ca4a183a466c08bd98dccbb1c',
+        'info_dict': {
+            'id': '1504699',
+            'ext': 'mp4',
+            'title': 'Still The King Ep. 109 in 3 Minutes',
+            'description': 'Relive or catch up with Still The King by watching this recap of season 1, episode 9. New episodes Sundays 9/8c.',
+            'timestamp': 1469421000.0,
+            'upload_date': '20160725',
+        },
     }, {
         'url': 'http://www.cmt.com/shows/party-down-south/party-down-south-ep-407-gone-girl/1738172/playlist/#id=1738172',
         'only_matching': True,
     }]
+
+    @classmethod
+    def _transform_rtmp_url(cls, rtmp_video_url):
+        if 'error_not_available.swf' in rtmp_video_url:
+            raise ExtractorError(
+                '%s said: video is not available' % cls.IE_NAME, expected=True)
+        return super(CMTIE, cls)._transform_rtmp_url(rtmp_video_url)
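
Aside: raising ExtractorError with expected=True is the convention for a site-side condition (here, Viacom's error_not_available.swf placeholder), so the user sees a clean message instead of a traceback. A minimal sketch with a made-up RTMP URL, assuming youtube_dl is importable:

from youtube_dl.utils import ExtractorError

rtmp_video_url = 'rtmpe://example.fplive.net/error_not_available.swf'  # sample value
if 'error_not_available.swf' in rtmp_video_url:
    # expected=True -> reported as a site error, not a crash
    raise ExtractorError('cmt.com said: video is not available', expected=True)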

View File

@@ -1,17 +1,7 @@
 from __future__ import unicode_literals

-import re
-
 from .mtv import MTVServicesInfoExtractor
-from ..compat import (
-    compat_str,
-    compat_urllib_parse_urlencode,
-)
-from ..utils import (
-    ExtractorError,
-    float_or_none,
-    unified_strdate,
-)
+from .common import InfoExtractor


 class ComedyCentralIE(MTVServicesInfoExtractor):
@@ -26,8 +16,10 @@ class ComedyCentralIE(MTVServicesInfoExtractor):
         'info_dict': {
             'id': 'cef0cbb3-e776-4bc9-b62e-8016deccb354',
             'ext': 'mp4',
-            'title': 'CC:Stand-Up|Greg Fitzsimmons: Life on Stage|Uncensored - Too Good of a Mother',
+            'title': 'CC:Stand-Up|August 18, 2013|1|0101|Uncensored - Too Good of a Mother',
             'description': 'After a certain point, breastfeeding becomes c**kblocking.',
+            'timestamp': 1376798400,
+            'upload_date': '20130818',
         },
     }, {
         'url': 'http://www.cc.com/shows/the-daily-show-with-trevor-noah/interviews/6yx39d/exclusive-rand-paul-extended-interview',
@@ -35,244 +27,43 @@ class ComedyCentralIE(MTVServicesInfoExtractor):
     }]


-class ComedyCentralShowsIE(MTVServicesInfoExtractor):
-    IE_DESC = 'The Daily Show / The Colbert Report'
-    # urls can be abbreviations like :thedailyshow
-    # urls for episodes like:
-    # or urls for clips like: http://www.thedailyshow.com/watch/mon-december-10-2012/any-given-gun-day
-    # or: http://www.colbertnation.com/the-colbert-report-videos/421667/november-29-2012/moon-shattering-news
-    # or: http://www.colbertnation.com/the-colbert-report-collections/422008/festival-of-lights/79524
-    _VALID_URL = r'''(?x)^(:(?P<shortname>tds|thedailyshow)
-                      |https?://(:www\.)?
-                          (?P<showname>thedailyshow|thecolbertreport|tosh)\.(?:cc\.)?com/
-                         ((?:full-)?episodes/(?:[0-9a-z]{6}/)?(?P<episode>.*)|
-                          (?P<clip>
-                              (?:(?:guests/[^/]+|videos|video-(?:clips|playlists)|special-editions|news-team/[^/]+)/[^/]+/(?P<videotitle>[^/?#]+))
-                              |(the-colbert-report-(videos|collections)/(?P<clipID>[0-9]+)/[^/]*/(?P<cntitle>.*?))
-                              |(watch/(?P<date>[^/]*)/(?P<tdstitle>.*))
-                          )|
-                          (?P<interview>
-                              extended-interviews/(?P<interID>[0-9a-z]+)/
-                              (?:playlist_tds_extended_)?(?P<interview_title>[^/?#]*?)
-                              (?:/[^/?#]?|[?#]|$))))
-                    '''
+class ToshIE(MTVServicesInfoExtractor):
+    IE_DESC = 'Tosh.0'
+    _VALID_URL = r'^https?://tosh\.cc\.com/video-(?:clips|collections)/[^/]+/(?P<videotitle>[^/?#]+)'
+    _FEED_URL = 'http://tosh.cc.com/feeds/mrss'
+
     _TESTS = [{
-        'url': 'http://thedailyshow.cc.com/watch/thu-december-13-2012/kristen-stewart',
-        'md5': '4e2f5cb088a83cd8cdb7756132f9739d',
-        'info_dict': {
-            'id': 'ab9ab3e7-5a98-4dbe-8b21-551dc0523d55',
-            'ext': 'mp4',
-            'upload_date': '20121213',
-            'description': 'Kristen Stewart learns to let loose in "On the Road."',
-            'uploader': 'thedailyshow',
-            'title': 'thedailyshow kristen-stewart part 1',
-        }
-    }, {
-        'url': 'http://thedailyshow.cc.com/extended-interviews/b6364d/sarah-chayes-extended-interview',
-        'info_dict': {
-            'id': 'sarah-chayes-extended-interview',
-            'description': 'Carnegie Endowment Senior Associate Sarah Chayes discusses how corrupt institutions function throughout the world in her book "Thieves of State: Why Corruption Threatens Global Security."',
-            'title': 'thedailyshow Sarah Chayes Extended Interview',
-        },
-        'playlist': [
-            {
-                'info_dict': {
-                    'id': '0baad492-cbec-4ec1-9e50-ad91c291127f',
-                    'ext': 'mp4',
-                    'upload_date': '20150129',
-                    'description': 'Carnegie Endowment Senior Associate Sarah Chayes discusses how corrupt institutions function throughout the world in her book "Thieves of State: Why Corruption Threatens Global Security."',
-                    'uploader': 'thedailyshow',
-                    'title': 'thedailyshow sarah-chayes-extended-interview part 1',
-                },
-            },
-            {
-                'info_dict': {
-                    'id': '1e4fb91b-8ce7-4277-bd7c-98c9f1bbd283',
-                    'ext': 'mp4',
-                    'upload_date': '20150129',
-                    'description': 'Carnegie Endowment Senior Associate Sarah Chayes discusses how corrupt institutions function throughout the world in her book "Thieves of State: Why Corruption Threatens Global Security."',
-                    'uploader': 'thedailyshow',
-                    'title': 'thedailyshow sarah-chayes-extended-interview part 2',
-                },
-            },
-        ],
-        'params': {
-            'skip_download': True,
-        },
-    }, {
-        'url': 'http://thedailyshow.cc.com/extended-interviews/xm3fnq/andrew-napolitano-extended-interview',
-        'only_matching': True,
-    }, {
-        'url': 'http://thecolbertreport.cc.com/videos/29w6fx/-realhumanpraise-for-fox-news',
-        'only_matching': True,
-    }, {
-        'url': 'http://thecolbertreport.cc.com/videos/gh6urb/neil-degrasse-tyson-pt--1?xrs=eml_col_031114',
-        'only_matching': True,
-    }, {
-        'url': 'http://thedailyshow.cc.com/guests/michael-lewis/3efna8/exclusive---michael-lewis-extended-interview-pt--3',
-        'only_matching': True,
-    }, {
-        'url': 'http://thedailyshow.cc.com/episodes/sy7yv0/april-8--2014---denis-leary',
-        'only_matching': True,
-    }, {
-        'url': 'http://thecolbertreport.cc.com/episodes/8ase07/april-8--2014---jane-goodall',
-        'only_matching': True,
-    }, {
-        'url': 'http://thedailyshow.cc.com/video-playlists/npde3s/the-daily-show-19088-highlights',
-        'only_matching': True,
-    }, {
-        'url': 'http://thedailyshow.cc.com/video-playlists/t6d9sg/the-daily-show-20038-highlights/be3cwo',
-        'only_matching': True,
-    }, {
-        'url': 'http://thedailyshow.cc.com/special-editions/2l8fdb/special-edition---a-look-back-at-food',
-        'only_matching': True,
-    }, {
-        'url': 'http://thedailyshow.cc.com/news-team/michael-che/7wnfel/we-need-to-talk-about-israel',
-        'only_matching': True,
-    }, {
         'url': 'http://tosh.cc.com/video-clips/68g93d/twitter-users-share-summer-plans',
+        'info_dict': {
+            'description': 'Tosh asked fans to share their summer plans.',
+            'title': 'Twitter Users Share Summer Plans',
+        },
+        'playlist': [{
+            'md5': 'f269e88114c1805bb6d7653fecea9e06',
+            'info_dict': {
+                'id': '90498ec2-ed00-11e0-aca6-0026b9414f30',
+                'ext': 'mp4',
+                'title': 'Tosh.0|June 9, 2077|2|211|Twitter Users Share Summer Plans',
+                'description': 'Tosh asked fans to share their summer plans.',
+                'thumbnail': 're:^https?://.*\.jpg',
+                # It's really reported to be published on year 2077
+                'upload_date': '20770610',
+                'timestamp': 3390510600,
+                'subtitles': {
+                    'en': 'mincount:3',
+                },
+            },
+        }]
+    }, {
+        'url': 'http://tosh.cc.com/video-collections/x2iz7k/just-plain-foul/m5q4fp',
         'only_matching': True,
     }]
-    _available_formats = ['3500', '2200', '1700', '1200', '750', '400']
-
-    _video_extensions = {
-        '3500': 'mp4',
-        '2200': 'mp4',
-        '1700': 'mp4',
-        '1200': 'mp4',
-        '750': 'mp4',
-        '400': 'mp4',
-    }
-    _video_dimensions = {
-        '3500': (1280, 720),
-        '2200': (960, 540),
-        '1700': (768, 432),
-        '1200': (640, 360),
-        '750': (512, 288),
-        '400': (384, 216),
-    }
+    @classmethod
+    def _transform_rtmp_url(cls, rtmp_video_url):
+        new_urls = super(ToshIE, cls)._transform_rtmp_url(rtmp_video_url)
+        new_urls['rtmp'] = rtmp_video_url.replace('viacomccstrm', 'viacommtvstrm')
+        return new_urls
-    def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-
-        if mobj.group('shortname'):
-            return self.url_result('http://www.cc.com/shows/the-daily-show-with-trevor-noah/full-episodes')
-
-        if mobj.group('clip'):
-            if mobj.group('videotitle'):
-                epTitle = mobj.group('videotitle')
-            elif mobj.group('showname') == 'thedailyshow':
-                epTitle = mobj.group('tdstitle')
-            else:
-                epTitle = mobj.group('cntitle')
-            dlNewest = False
-        elif mobj.group('interview'):
-            epTitle = mobj.group('interview_title')
-            dlNewest = False
-        else:
-            dlNewest = not mobj.group('episode')
-            if dlNewest:
-                epTitle = mobj.group('showname')
-            else:
-                epTitle = mobj.group('episode')
-        show_name = mobj.group('showname')
-
-        webpage, htmlHandle = self._download_webpage_handle(url, epTitle)
-        if dlNewest:
-            url = htmlHandle.geturl()
-            mobj = re.match(self._VALID_URL, url, re.VERBOSE)
-            if mobj is None:
-                raise ExtractorError('Invalid redirected URL: ' + url)
-            if mobj.group('episode') == '':
-                raise ExtractorError('Redirected URL is still not specific: ' + url)
-            epTitle = (mobj.group('episode') or mobj.group('videotitle')).rpartition('/')[-1]
-
-        mMovieParams = re.findall('(?:<param name="movie" value="|var url = ")(http://media.mtvnservices.com/([^"]*(?:episode|video).*?:.*?))"', webpage)
-        if len(mMovieParams) == 0:
-            # The Colbert Report embeds the information in a without
-            # a URL prefix; so extract the alternate reference
-            # and then add the URL prefix manually.
-            altMovieParams = re.findall('data-mgid="([^"]*(?:episode|video|playlist).*?:.*?)"', webpage)
-            if len(altMovieParams) == 0:
-                raise ExtractorError('unable to find Flash URL in webpage ' + url)
-            else:
-                mMovieParams = [('http://media.mtvnservices.com/' + altMovieParams[0], altMovieParams[0])]
-
-        uri = mMovieParams[0][1]
-        # Correct cc.com in uri
-        uri = re.sub(r'(episode:[^.]+)(\.cc)?\.com', r'\1.com', uri)
-
-        index_url = 'http://%s.cc.com/feeds/mrss?%s' % (show_name, compat_urllib_parse_urlencode({'uri': uri}))
-        idoc = self._download_xml(
-            index_url, epTitle,
-            'Downloading show index', 'Unable to download episode index')
-
-        title = idoc.find('./channel/title').text
-        description = idoc.find('./channel/description').text
-
-        entries = []
-        item_els = idoc.findall('.//item')
-        for part_num, itemEl in enumerate(item_els):
-            upload_date = unified_strdate(itemEl.findall('./pubDate')[0].text)
-            thumbnail = itemEl.find('.//{http://search.yahoo.com/mrss/}thumbnail').attrib.get('url')
-
-            content = itemEl.find('.//{http://search.yahoo.com/mrss/}content')
-            duration = float_or_none(content.attrib.get('duration'))
-            mediagen_url = content.attrib['url']
-            guid = itemEl.find('./guid').text.rpartition(':')[-1]
-
-            cdoc = self._download_xml(
-                mediagen_url, epTitle,
-                'Downloading configuration for segment %d / %d' % (part_num + 1, len(item_els)))
-
-            turls = []
-            for rendition in cdoc.findall('.//rendition'):
-                finfo = (rendition.attrib['bitrate'], rendition.findall('./src')[0].text)
-                turls.append(finfo)
-
-            formats = []
-            for format, rtmp_video_url in turls:
-                w, h = self._video_dimensions.get(format, (None, None))
-                formats.append({
-                    'format_id': 'vhttp-%s' % format,
-                    'url': self._transform_rtmp_url(rtmp_video_url),
-                    'ext': self._video_extensions.get(format, 'mp4'),
-                    'height': h,
-                    'width': w,
-                })
-                formats.append({
-                    'format_id': 'rtmp-%s' % format,
-                    'url': rtmp_video_url.replace('viacomccstrm', 'viacommtvstrm'),
-                    'ext': self._video_extensions.get(format, 'mp4'),
-                    'height': h,
-                    'width': w,
-                })
-            self._sort_formats(formats)
-
-            subtitles = self._extract_subtitles(cdoc, guid)
-
-            virtual_id = show_name + ' ' + epTitle + ' part ' + compat_str(part_num + 1)
-            entries.append({
-                'id': guid,
-                'title': virtual_id,
-                'formats': formats,
-                'uploader': show_name,
-                'upload_date': upload_date,
-                'duration': duration,
-                'thumbnail': thumbnail,
-                'description': description,
-                'subtitles': subtitles,
-            })
-
-        return {
-            '_type': 'playlist',
-            'id': epTitle,
-            'entries': entries,
-            'title': show_name + ' ' + title,
-            'description': description,
-        }


 class ComedyCentralTVIE(MTVServicesInfoExtractor):
@@ -306,3 +97,22 @@ class ComedyCentralTVIE(MTVServicesInfoExtractor):
             webpage, 'mrss url', group='url')

         return self._get_videos_info_from_url(mrss_url, video_id)
+
+
+class ComedyCentralShortnameIE(InfoExtractor):
+    _VALID_URL = r'^:(?P<id>tds|thedailyshow)$'
+    _TESTS = [{
+        'url': ':tds',
+        'only_matching': True,
+    }, {
+        'url': ':thedailyshow',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        shortcut_map = {
+            'tds': 'http://www.cc.com/shows/the-daily-show-with-trevor-noah/full-episodes',
+            'thedailyshow': 'http://www.cc.com/shows/the-daily-show-with-trevor-noah/full-episodes',
+        }
+        return self.url_result(shortcut_map[video_id])

View File

@@ -662,6 +662,24 @@ class InfoExtractor(object):
         else:
             return res

+    def _get_netrc_login_info(self, netrc_machine=None):
+        username = None
+        password = None
+        netrc_machine = netrc_machine or self._NETRC_MACHINE
+
+        if self._downloader.params.get('usenetrc', False):
+            try:
+                info = netrc.netrc().authenticators(netrc_machine)
+                if info is not None:
+                    username = info[0]
+                    password = info[2]
+                else:
+                    raise netrc.NetrcParseError('No authenticators for %s' % netrc_machine)
+            except (IOError, netrc.NetrcParseError) as err:
+                self._downloader.report_warning('parsing .netrc: %s' % error_to_compat_str(err))
+
+        return (username, password)
+
     def _get_login_info(self):
         """
         Get the login info as (username, password)
@@ -679,16 +697,8 @@ class InfoExtractor(object):
         if downloader_params.get('username') is not None:
             username = downloader_params['username']
             password = downloader_params['password']
-        elif downloader_params.get('usenetrc', False):
-            try:
-                info = netrc.netrc().authenticators(self._NETRC_MACHINE)
-                if info is not None:
-                    username = info[0]
-                    password = info[2]
-                else:
-                    raise netrc.NetrcParseError('No authenticators for %s' % self._NETRC_MACHINE)
-            except (IOError, netrc.NetrcParseError) as err:
-                self._downloader.report_warning('parsing .netrc: %s' % error_to_compat_str(err))
+        else:
+            username, password = self._get_netrc_login_info()

         return (username, password)
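
Aside: the refactor just lifts the standard-library netrc lookup into a reusable helper. The underlying API, for reference — the machine name and credentials here are hypothetical:

import netrc

# ~/.netrc line assumed by this sketch:
#   machine youtube login alice password s3cret
try:
    info = netrc.netrc().authenticators('youtube')
    if info is not None:
        username, _account, password = info  # (login, account, password)
except (IOError, netrc.NetrcParseError) as err:
    print('parsing .netrc: %s' % err)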
@@ -727,9 +737,14 @@ class InfoExtractor(object):
                    [^>]+?content=(["\'])(?P<content>.*?)\2''' % re.escape(prop)

     def _og_search_property(self, prop, html, name=None, **kargs):
+        if not isinstance(prop, (list, tuple)):
+            prop = [prop]
         if name is None:
-            name = 'OpenGraph %s' % prop
-        escaped = self._search_regex(self._og_regexes(prop), html, name, flags=re.DOTALL, **kargs)
+            name = 'OpenGraph %s' % prop[0]
+        og_regexes = []
+        for p in prop:
+            og_regexes.extend(self._og_regexes(p))
+        escaped = self._search_regex(og_regexes, html, name, flags=re.DOTALL, **kargs)
         if escaped is None:
             return None
         return unescapeHTML(escaped)
@@ -811,11 +826,14 @@ class InfoExtractor(object):
         json_ld = self._search_regex(
             r'(?s)<script[^>]+type=(["\'])application/ld\+json\1[^>]*>(?P<json_ld>.+?)</script>',
             html, 'JSON-LD', group='json_ld', **kwargs)
+        default = kwargs.get('default', NO_DEFAULT)
         if not json_ld:
-            return {}
-        return self._json_ld(
-            json_ld, video_id, fatal=kwargs.get('fatal', True),
-            expected_type=expected_type)
+            return default if default is not NO_DEFAULT else {}
+        # JSON-LD may be malformed and thus `fatal` should be respected.
+        # At the same time `default` may be passed that assumes `fatal=False`
+        # for _search_regex. Let's simulate the same behavior here as well.
+        fatal = kwargs.get('fatal', True) if default == NO_DEFAULT else False
+        return self._json_ld(json_ld, video_id, fatal=fatal, expected_type=expected_type)

     def _json_ld(self, json_ld, video_id, fatal=True, expected_type=None):
         if isinstance(json_ld, compat_str):
@@ -823,41 +841,47 @@ class InfoExtractor(object):
         if not json_ld:
             return {}
         info = {}
-        if json_ld.get('@context') == 'http://schema.org':
-            item_type = json_ld.get('@type')
-            if expected_type is not None and expected_type != item_type:
-                return info
-            if item_type == 'TVEpisode':
-                info.update({
-                    'episode': unescapeHTML(json_ld.get('name')),
-                    'episode_number': int_or_none(json_ld.get('episodeNumber')),
-                    'description': unescapeHTML(json_ld.get('description')),
-                })
-                part_of_season = json_ld.get('partOfSeason')
-                if isinstance(part_of_season, dict) and part_of_season.get('@type') == 'TVSeason':
-                    info['season_number'] = int_or_none(part_of_season.get('seasonNumber'))
-                part_of_series = json_ld.get('partOfSeries')
-                if isinstance(part_of_series, dict) and part_of_series.get('@type') == 'TVSeries':
-                    info['series'] = unescapeHTML(part_of_series.get('name'))
-            elif item_type == 'Article':
-                info.update({
-                    'timestamp': parse_iso8601(json_ld.get('datePublished')),
-                    'title': unescapeHTML(json_ld.get('headline')),
-                    'description': unescapeHTML(json_ld.get('articleBody')),
-                })
-            elif item_type == 'VideoObject':
-                info.update({
-                    'url': json_ld.get('contentUrl'),
-                    'title': unescapeHTML(json_ld.get('name')),
-                    'description': unescapeHTML(json_ld.get('description')),
-                    'thumbnail': json_ld.get('thumbnailUrl'),
-                    'duration': parse_duration(json_ld.get('duration')),
-                    'timestamp': unified_timestamp(json_ld.get('uploadDate')),
-                    'filesize': float_or_none(json_ld.get('contentSize')),
-                    'tbr': int_or_none(json_ld.get('bitrate')),
-                    'width': int_or_none(json_ld.get('width')),
-                    'height': int_or_none(json_ld.get('height')),
-                })
+        if not isinstance(json_ld, (list, tuple, dict)):
+            return info
+        if isinstance(json_ld, dict):
+            json_ld = [json_ld]
+        for e in json_ld:
+            if e.get('@context') == 'http://schema.org':
+                item_type = e.get('@type')
+                if expected_type is not None and expected_type != item_type:
+                    return info
+                if item_type == 'TVEpisode':
+                    info.update({
+                        'episode': unescapeHTML(e.get('name')),
+                        'episode_number': int_or_none(e.get('episodeNumber')),
+                        'description': unescapeHTML(e.get('description')),
+                    })
+                    part_of_season = e.get('partOfSeason')
+                    if isinstance(part_of_season, dict) and part_of_season.get('@type') == 'TVSeason':
+                        info['season_number'] = int_or_none(part_of_season.get('seasonNumber'))
+                    part_of_series = e.get('partOfSeries') or e.get('partOfTVSeries')
+                    if isinstance(part_of_series, dict) and part_of_series.get('@type') == 'TVSeries':
+                        info['series'] = unescapeHTML(part_of_series.get('name'))
+                elif item_type == 'Article':
+                    info.update({
+                        'timestamp': parse_iso8601(e.get('datePublished')),
+                        'title': unescapeHTML(e.get('headline')),
+                        'description': unescapeHTML(e.get('articleBody')),
+                    })
+                elif item_type == 'VideoObject':
+                    info.update({
+                        'url': e.get('contentUrl'),
+                        'title': unescapeHTML(e.get('name')),
+                        'description': unescapeHTML(e.get('description')),
+                        'thumbnail': e.get('thumbnailUrl'),
+                        'duration': parse_duration(e.get('duration')),
+                        'timestamp': unified_timestamp(e.get('uploadDate')),
+                        'filesize': float_or_none(e.get('contentSize')),
+                        'tbr': int_or_none(e.get('bitrate')),
+                        'width': int_or_none(e.get('width')),
+                        'height': int_or_none(e.get('height')),
+                    })
+                break
         return dict((k, v) for k, v in info.items() if v is not None)

     @staticmethod
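
Aside: the _json_ld change is essentially input normalization — pages may embed either a single JSON-LD object or a list of them. A stripped-down sketch of just that normalization (a plain function, not the real method):

def schema_org_entries(json_ld):
    if not isinstance(json_ld, (list, tuple, dict)):
        return []
    if isinstance(json_ld, dict):
        json_ld = [json_ld]
    return [e for e in json_ld if e.get('@context') == 'http://schema.org']

print(schema_org_entries({'@context': 'http://schema.org', '@type': 'VideoObject'}))
print(schema_org_entries([{'@context': 'http://example.org'}]))  # []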
@@ -911,7 +935,8 @@ class InfoExtractor(object):
             if f.get('ext') in ['f4f', 'f4m']:  # Not yet supported
                 preference -= 0.5

-            proto_preference = 0 if determine_protocol(f) in ['http', 'https'] else -0.1
+            protocol = f.get('protocol') or determine_protocol(f)
+            proto_preference = 0 if protocol in ['http', 'https'] else (-0.5 if protocol == 'rtsp' else -0.1)

             if f.get('vcodec') == 'none':  # audio only
                 preference -= 50
@@ -1128,7 +1153,7 @@ class InfoExtractor(object):
             'url': m3u8_url,
             'ext': ext,
             'protocol': 'm3u8',
-            'preference': preference - 1 if preference else -1,
+            'preference': preference - 100 if preference else -100,
             'resolution': 'multiple',
             'format_note': 'Quality selection URL',
         }
@@ -1786,7 +1811,7 @@ class InfoExtractor(object):
         any_restricted = False
         for tc in self.get_testcases(include_onlymatching=False):
-            if 'playlist' in tc:
+            if tc.get('playlist', []):
                 tc = tc['playlist'][0]
             is_restricted = age_restricted(
                 tc.get('info_dict', {}).get('age_limit'), age_limit)
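
Aside: the format-sorting tweak above gives RTSP its own, lower rank instead of lumping it in with every other non-HTTP protocol. The ordering it produces, in isolation:

def proto_preference(protocol):
    # Mirrors the expression in the format-sorting key above.
    return 0 if protocol in ['http', 'https'] else (-0.5 if protocol == 'rtsp' else -0.1)

assert proto_preference('https') > proto_preference('m3u8') > proto_preference('rtsp')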

View File

@@ -5,13 +5,17 @@ import re

 from .common import InfoExtractor
 from ..compat import (
-    compat_urllib_parse_urlencode,
     compat_urllib_parse_urlparse,
     compat_urlparse,
 )
 from ..utils import (
     orderedSet,
     remove_end,
+    extract_attributes,
+    mimetype2ext,
+    determine_ext,
+    int_or_none,
+    parse_iso8601,
 )
@@ -58,6 +62,9 @@ class CondeNastIE(InfoExtractor):
             'ext': 'mp4',
             'title': '3D Printed Speakers Lit With LED',
             'description': 'Check out these beautiful 3D printed LED speakers. You can\'t actually buy them, but LumiGeek is working on a board that will let you make you\'re own.',
+            'uploader': 'wired',
+            'upload_date': '20130314',
+            'timestamp': 1363219200,
         }
     }, {
         # JS embed
@@ -67,70 +74,93 @@ class CondeNastIE(InfoExtractor):
             'id': '55f9cf8b61646d1acf00000c',
             'ext': 'mp4',
             'title': '3D printed TSA Travel Sentry keys really do open TSA locks',
+            'uploader': 'arstechnica',
+            'upload_date': '20150916',
+            'timestamp': 1442434955,
         }
     }]

     def _extract_series(self, url, webpage):
-        title = self._html_search_regex(r'<div class="cne-series-info">.*?<h1>(.+?)</h1>',
-                                        webpage, 'series title', flags=re.DOTALL)
+        title = self._html_search_regex(
+            r'(?s)<div class="cne-series-info">.*?<h1>(.+?)</h1>',
+            webpage, 'series title')
         url_object = compat_urllib_parse_urlparse(url)
         base_url = '%s://%s' % (url_object.scheme, url_object.netloc)
-        m_paths = re.finditer(r'<p class="cne-thumb-title">.*?<a href="(/watch/.+?)["\?]',
-                              webpage, flags=re.DOTALL)
+        m_paths = re.finditer(
+            r'(?s)<p class="cne-thumb-title">.*?<a href="(/watch/.+?)["\?]', webpage)
         paths = orderedSet(m.group(1) for m in m_paths)
         build_url = lambda path: compat_urlparse.urljoin(base_url, path)
         entries = [self.url_result(build_url(path), 'CondeNast') for path in paths]
         return self.playlist_result(entries, playlist_title=title)

     def _extract_video(self, webpage, url_type):
-        if url_type != 'embed':
-            description = self._html_search_regex(
-                [
-                    r'<div class="cne-video-description">(.+?)</div>',
-                    r'<div class="video-post-content">(.+?)</div>',
-                ],
-                webpage, 'description', fatal=False, flags=re.DOTALL)
+        query = {}
+        params = self._search_regex(
+            r'(?s)var params = {(.+?)}[;,]', webpage, 'player params', default=None)
+        if params:
+            query.update({
+                'videoId': self._search_regex(r'videoId: [\'"](.+?)[\'"]', params, 'video id'),
+                'playerId': self._search_regex(r'playerId: [\'"](.+?)[\'"]', params, 'player id'),
+                'target': self._search_regex(r'target: [\'"](.+?)[\'"]', params, 'target'),
+            })
         else:
-            description = None
-        params = self._search_regex(r'var params = {(.+?)}[;,]', webpage,
-                                    'player params', flags=re.DOTALL)
-        video_id = self._search_regex(r'videoId: [\'"](.+?)[\'"]', params, 'video id')
-        player_id = self._search_regex(r'playerId: [\'"](.+?)[\'"]', params, 'player id')
-        target = self._search_regex(r'target: [\'"](.+?)[\'"]', params, 'target')
-        data = compat_urllib_parse_urlencode({'videoId': video_id,
-                                              'playerId': player_id,
-                                              'target': target,
-                                              })
-        base_info_url = self._search_regex(r'url = [\'"](.+?)[\'"][,;]',
-                                           webpage, 'base info url',
-                                           default='http://player.cnevids.com/player/loader.js?')
-        info_url = base_info_url + data
-        info_page = self._download_webpage(info_url, video_id,
-                                           'Downloading video info')
-        video_info = self._search_regex(r'var\s+video\s*=\s*({.+?});', info_page, 'video info')
-        video_info = self._parse_json(video_info, video_id)
-
-        formats = [{
-            'format_id': '%s-%s' % (fdata['type'].split('/')[-1], fdata['quality']),
-            'url': fdata['src'],
-            'ext': fdata['type'].split('/')[-1],
-            'quality': 1 if fdata['quality'] == 'high' else 0,
-        } for fdata in video_info['sources'][0]]
+            params = extract_attributes(self._search_regex(
+                r'(<[^>]+data-js="video-player"[^>]+>)',
+                webpage, 'player params element'))
+            query.update({
+                'videoId': params['data-video'],
+                'playerId': params['data-player'],
+                'target': params['id'],
+            })
+        video_id = query['videoId']
+        video_info = None
+        info_page = self._download_webpage(
+            'http://player.cnevids.com/player/video.js',
+            video_id, 'Downloading video info', query=query, fatal=False)
+        if info_page:
+            video_info = self._parse_json(self._search_regex(
+                r'loadCallback\(({.+})\)', info_page, 'video info'), video_id)['video']
+        else:
+            info_page = self._download_webpage(
+                'http://player.cnevids.com/player/loader.js',
+                video_id, 'Downloading loader info', query=query)
+            video_info = self._parse_json(self._search_regex(
+                r'var\s+video\s*=\s*({.+?});', info_page, 'video info'), video_id)
+        title = video_info['title']
+
+        formats = []
+        for fdata in video_info.get('sources', [{}])[0]:
+            src = fdata.get('src')
+            if not src:
+                continue
+            ext = mimetype2ext(fdata.get('type')) or determine_ext(src)
+            quality = fdata.get('quality')
+            formats.append({
+                'format_id': ext + ('-%s' % quality if quality else ''),
+                'url': src,
+                'ext': ext,
+                'quality': 1 if quality == 'high' else 0,
+            })
         self._sort_formats(formats)

-        return {
+        info = self._search_json_ld(
+            webpage, video_id, fatal=False) if url_type != 'embed' else {}
+        info.update({
             'id': video_id,
             'formats': formats,
-            'title': video_info['title'],
-            'thumbnail': video_info['poster_frame'],
-            'description': description,
-        }
+            'title': title,
+            'thumbnail': video_info.get('poster_frame'),
+            'uploader': video_info.get('brand'),
+            'duration': int_or_none(video_info.get('duration')),
+            'tags': video_info.get('tags'),
+            'series': video_info.get('series_title'),
+            'season': video_info.get('season_title'),
+            'timestamp': parse_iso8601(video_info.get('premiere_date')),
+        })
+        return info

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        site = mobj.group('site')
-        url_type = mobj.group('type')
-        item_id = mobj.group('id')
+        site, url_type, item_id = re.match(self._VALID_URL, url).groups()

         # Convert JS embed to regular embed
         if url_type == 'embedjs':
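
Aside: mimetype2ext with a determine_ext fallback is the idiom the new format loop uses to derive an extension whether or not the source carries a MIME type. Assuming youtube_dl is importable (URLs are examples):

from youtube_dl.utils import determine_ext, mimetype2ext

print(mimetype2ext('video/mp4'))                          # 'mp4'
print(mimetype2ext(None))                                 # None
print(determine_ext('http://example.com/clip.m3u8'))      # 'm3u8'
print(mimetype2ext(None) or determine_ext('a/clip.mp4'))  # 'mp4' (the fallback path)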

View File

@@ -114,6 +114,21 @@ class CrunchyrollIE(CrunchyrollBaseIE):
             # rtmp
             'skip_download': True,
         },
+    }, {
+        'url': 'http://www.crunchyroll.com/rezero-starting-life-in-another-world-/episode-5-the-morning-of-our-promise-is-still-distant-702409',
+        'info_dict': {
+            'id': '702409',
+            'ext': 'mp4',
+            'title': 'Re:ZERO -Starting Life in Another World- Episode 5 The Morning of Our Promise Is Still Distant',
+            'description': 'md5:97664de1ab24bbf77a9c01918cb7dca9',
+            'thumbnail': 're:^https?://.*\.jpg$',
+            'uploader': 'TV TOKYO',
+            'upload_date': '20160508',
+        },
+        'params': {
+            # m3u8 download
+            'skip_download': True,
+        },
     }, {
         'url': 'http://www.crunchyroll.fr/girl-friend-beta/episode-11-goodbye-la-mode-661697',
         'only_matching': True,
@@ -336,9 +351,18 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
                 if video_encode_id in video_encode_ids:
                     continue
                 video_encode_ids.append(video_encode_id)
+
+                video_file = xpath_text(stream_info, './file')
+                if not video_file:
+                    continue
+                if video_file.startswith('http'):
+                    formats.extend(self._extract_m3u8_formats(
+                        video_file, video_id, 'mp4', entry_protocol='m3u8_native',
+                        m3u8_id='hls', fatal=False))
+                    continue
+
                 video_url = xpath_text(stream_info, './host')
-                video_play_path = xpath_text(stream_info, './file')
-                if not video_url or not video_play_path:
+                if not video_url:
                     continue
                 metadata = stream_info.find('./metadata')
                 format_info = {
@@ -353,7 +377,7 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
                     parsed_video_url = compat_urlparse.urlparse(video_url)
                     direct_video_url = compat_urlparse.urlunparse(parsed_video_url._replace(
                         netloc='v.lvlt.crcdn.net',
-                        path='%s/%s' % (remove_end(parsed_video_url.path, '/'), video_play_path.split(':')[-1])))
+                        path='%s/%s' % (remove_end(parsed_video_url.path, '/'), video_file.split(':')[-1])))
                     if self._is_valid_url(direct_video_url, video_id, video_format):
                         format_info.update({
                             'url': direct_video_url,
@@ -363,7 +387,7 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
                     format_info.update({
                         'url': video_url,
-                        'play_path': video_play_path,
+                        'play_path': video_file,
                         'ext': 'flv',
                     })
                     formats.append(format_info)
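
Aside: the Crunchyroll change dispatches on the <file> element — an absolute HTTP URL now means an HLS playlist, anything else is still an RTMP play path. Reduced to a plain function with sample values:

def classify_stream(video_file):
    if video_file.startswith('http'):
        return 'hls', video_file         # would go to _extract_m3u8_formats
    return 'rtmp-play-path', video_file  # combined with the <host> URL

print(classify_stream('http://example.com/stream.m3u8'))
print(classify_stream('mp4:media-1234.mp4'))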

View File

@@ -1,13 +1,12 @@
-# -*- coding: utf-8 -*-
+# coding: utf-8
 from __future__ import unicode_literals

 from .common import InfoExtractor
-from ..utils import parse_iso8601, ExtractorError
+from ..utils import unified_timestamp


 class CtsNewsIE(InfoExtractor):
     IE_DESC = '華視新聞'
+    # https connection failed (Connection reset)
     _VALID_URL = r'https?://news\.cts\.com\.tw/[a-z]+/[a-z]+/\d+/(?P<id>\d+)\.html'
     _TESTS = [{
         'url': 'http://news.cts.com.tw/cts/international/201501/201501291578109.html',
@@ -16,7 +15,7 @@ class CtsNewsIE(InfoExtractor):
             'id': '201501291578109',
             'ext': 'mp4',
             'title': '以色列.真主黨交火 3人死亡',
-            'description': 'md5:95e9b295c898b7ff294f09d450178d7d',
+            'description': '以色列和黎巴嫩真主黨,爆發五年最嚴重衝突,雙方砲轟交火,兩名以軍死亡,還有一名西班牙籍的聯合國維和人...',
             'timestamp': 1422528540,
             'upload_date': '20150129',
         }
@@ -28,7 +27,7 @@ class CtsNewsIE(InfoExtractor):
             'id': '201309031304098',
             'ext': 'mp4',
             'title': '韓國31歲童顏男 貌如十多歲小孩',
-            'description': 'md5:f183feeba3752b683827aab71adad584',
+            'description': '越有年紀的人越希望看起來年輕一點而南韓卻有一位31歲的男子看起來像是11、12歲的小孩身...',
             'thumbnail': 're:^https?://.*\.jpg$',
             'timestamp': 1378205880,
             'upload_date': '20130903',
@@ -36,8 +35,7 @@ class CtsNewsIE(InfoExtractor):
     }, {
         # With Youtube embedded video
         'url': 'http://news.cts.com.tw/cts/money/201501/201501291578003.html',
-        'md5': '1d842c771dc94c8c3bca5af2cc1db9c5',
-        'add_ie': ['Youtube'],
+        'md5': 'e4726b2ccd70ba2c319865e28f0a91d1',
         'info_dict': {
             'id': 'OVbfO7d0_hQ',
             'ext': 'mp4',
@@ -47,42 +45,37 @@ class CtsNewsIE(InfoExtractor):
             'upload_date': '20150128',
             'uploader_id': 'TBSCTS',
             'uploader': '中華電視公司',
-        }
+        },
+        'add_ie': ['Youtube'],
     }]

     def _real_extract(self, url):
         news_id = self._match_id(url)
         page = self._download_webpage(url, news_id)

-        if self._search_regex(r'(CTSPlayer2)', page, 'CTSPlayer2 identifier', default=None):
-            feed_url = self._html_search_regex(
-                r'(http://news\.cts\.com\.tw/action/mp4feed\.php\?news_id=\d+)',
-                page, 'feed url')
-            video_url = self._download_webpage(
-                feed_url, news_id, note='Fetching feed')
+        news_id = self._hidden_inputs(page).get('get_id')
+
+        if news_id:
+            mp4_feed = self._download_json(
+                'http://news.cts.com.tw/action/test_mp4feed.php',
+                news_id, note='Fetching feed', query={'news_id': news_id})
+            video_url = mp4_feed['source_url']
         else:
             self.to_screen('Not CTSPlayer video, trying Youtube...')
             youtube_url = self._search_regex(
-                r'src="(//www\.youtube\.com/embed/[^"]+)"', page, 'youtube url',
-                default=None)
-            if not youtube_url:
-                raise ExtractorError('The news includes no videos!', expected=True)
+                r'src="(//www\.youtube\.com/embed/[^"]+)"', page, 'youtube url')

-            return {
-                '_type': 'url',
-                'url': youtube_url,
-                'ie_key': 'Youtube',
-            }
+            return self.url_result(youtube_url, ie='Youtube')

         description = self._html_search_meta('description', page)
-        title = self._html_search_meta('title', page)
+        title = self._html_search_meta('title', page, fatal=True)
         thumbnail = self._html_search_meta('image', page)

         datetime_str = self._html_search_regex(
-            r'(\d{4}/\d{2}/\d{2} \d{2}:\d{2})', page, 'date and time')
-        # Transform into ISO 8601 format with timezone info
-        datetime_str = datetime_str.replace('/', '-') + ':00+0800'
-        timestamp = parse_iso8601(datetime_str, delimiter=' ')
+            r'(\d{4}/\d{2}/\d{2} \d{2}:\d{2})', page, 'date and time', fatal=False)
+        timestamp = None
+        if datetime_str:
+            timestamp = unified_timestamp(datetime_str) - 8 * 3600

         return {
             'id': news_id,
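
Aside: the page prints Taiwan local time (UTC+8); unified_timestamp parses the naive string as if it were UTC, so subtracting eight hours yields the true epoch. Assuming youtube_dl is importable:

from youtube_dl.utils import unified_timestamp

local = unified_timestamp('2015/01/29 18:09')  # parsed as if UTC
timestamp = local - 8 * 3600 if local is not None else None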

View File

@@ -9,7 +9,7 @@ from ..utils import (

 class CWTVIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?cw(?:tv|seed)\.com/(?:shows/)?(?:[^/]+/){2}\?.*\bplay=(?P<id>[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12})'
+    _VALID_URL = r'https?://(?:www\.)?cw(?:tv(?:pr)?|seed)\.com/(?:shows/)?(?:[^/]+/)+[^?]*\?.*\b(?:play|watch)=(?P<id>[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12})'
     _TESTS = [{
         'url': 'http://cwtv.com/shows/arrow/legends-of-yesterday/?play=6b15e985-9345-4f60-baf8-56e96be57c63',
         'info_dict': {
@@ -28,7 +28,8 @@ class CWTVIE(InfoExtractor):
         'params': {
             # m3u8 download
             'skip_download': True,
-        }
+        },
+        'skip': 'redirect to http://cwtv.com/shows/arrow/',
     }, {
         'url': 'http://www.cwseed.com/shows/whose-line-is-it-anyway/jeff-davis-4/?play=24282b12-ead2-42f2-95ad-26770c2c6088',
         'info_dict': {
@@ -44,22 +45,43 @@ class CWTVIE(InfoExtractor):
             'upload_date': '20151006',
             'timestamp': 1444107300,
         },
-        'params': {
-            # m3u8 download
-            'skip_download': True,
-        }
     }, {
         'url': 'http://cwtv.com/thecw/chroniclesofcisco/?play=8adebe35-f447-465f-ab52-e863506ff6d6',
         'only_matching': True,
+    }, {
+        'url': 'http://cwtvpr.com/the-cw/video?watch=9eee3f60-ef4e-440b-b3b2-49428ac9c54e',
+        'only_matching': True,
+    }, {
+        'url': 'http://cwtv.com/shows/arrow/legends-of-yesterday/?watch=6b15e985-9345-4f60-baf8-56e96be57c63',
+        'only_matching': True,
     }]

     def _real_extract(self, url):
         video_id = self._match_id(url)
-        video_data = self._download_json(
-            'http://metaframe.digitalsmiths.tv/v2/CWtv/assets/%s/partner/132?format=json' % video_id, video_id)
-
-        formats = self._extract_m3u8_formats(
-            video_data['videos']['variantplaylist']['uri'], video_id, 'mp4')
+        video_data = None
+        formats = []
+        for partner in (154, 213):
+            vdata = self._download_json(
+                'http://metaframe.digitalsmiths.tv/v2/CWtv/assets/%s/partner/%d?format=json' % (video_id, partner), video_id, fatal=False)
+            if not vdata:
+                continue
+            video_data = vdata
+            for quality, quality_data in vdata.get('videos', {}).items():
+                quality_url = quality_data.get('uri')
+                if not quality_url:
+                    continue
+                if quality == 'variantplaylist':
+                    formats.extend(self._extract_m3u8_formats(
+                        quality_url, video_id, 'mp4', m3u8_id='hls', fatal=False))
+                else:
+                    tbr = int_or_none(quality_data.get('bitrate'))
+                    format_id = 'http' + ('-%d' % tbr if tbr else '')
+                    if self._is_valid_url(quality_url, video_id, format_id):
+                        formats.append({
+                            'format_id': format_id,
+                            'url': quality_url,
+                            'tbr': tbr,
+                        })
         self._sort_formats(formats)

         thumbnails = [{
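
Aside: a small detail from the new format loop — int_or_none tolerates missing or non-numeric bitrates, and the format_id only grows a suffix when one is present. Assuming youtube_dl is importable (sample dicts):

from youtube_dl.utils import int_or_none

for quality_data in ({'bitrate': '3500'}, {}):
    tbr = int_or_none(quality_data.get('bitrate'))
    print('http' + ('-%d' % tbr if tbr else ''))  # 'http-3500', then 'http'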

View File

@@ -331,7 +331,9 @@ class DailymotionPlaylistIE(DailymotionBaseInfoExtractor):
             for video_id in re.findall(r'data-xid="(.+?)"', webpage):
                 if video_id not in video_ids:
-                    yield self.url_result('http://www.dailymotion.com/video/%s' % video_id, 'Dailymotion')
+                    yield self.url_result(
+                        'http://www.dailymotion.com/video/%s' % video_id,
+                        DailymotionIE.ie_key(), video_id)
                     video_ids.add(video_id)
             if re.search(self._MORE_PAGES_INDICATOR, webpage) is None:

View File

@@ -38,6 +38,12 @@ class DBTVIE(InfoExtractor):
         'only_matching': True,
     }]

+    @staticmethod
+    def _extract_urls(webpage):
+        return [url for _, url in re.findall(
+            r'<iframe[^>]+src=(["\'])((?:https?:)?//(?:www\.)?dbtv\.no/(?:lazy)?player/\d+.*?)\1',
+            webpage)]
+
     def _real_extract(self, url):
         video_id, display_id = re.match(self._VALID_URL, url).groups()
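
Aside: _extract_urls is the hook the generic extractor calls to find embedded DBTV players. The same regex in action on a sample iframe (markup invented for the demonstration):

import re

webpage = '<iframe src="https://www.dbtv.no/lazyplayer/33100" width="100%"></iframe>'
urls = [url for _, url in re.findall(
    r'<iframe[^>]+src=(["\'])((?:https?:)?//(?:www\.)?dbtv\.no/(?:lazy)?player/\d+.*?)\1',
    webpage)]
print(urls)  # ['https://www.dbtv.no/lazyplayer/33100']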

View File

@@ -0,0 +1,108 @@
from __future__ import unicode_literals
from .common import InfoExtractor
from ..compat import compat_str
from ..utils import (
extract_attributes,
int_or_none,
parse_age_limit,
unescapeHTML,
)
class DiscoveryGoIE(InfoExtractor):
_VALID_URL = r'''(?x)https?://(?:www\.)?(?:
discovery|
investigationdiscovery|
discoverylife|
animalplanet|
ahctv|
destinationamerica|
sciencechannel|
tlc|
velocitychannel
)go\.com/(?:[^/]+/)*(?P<id>[^/?#&]+)'''
_TEST = {
'url': 'https://www.discoverygo.com/love-at-first-kiss/kiss-first-ask-questions-later/',
'info_dict': {
'id': '57a33c536b66d1cd0345eeb1',
'ext': 'mp4',
'title': 'Kiss First, Ask Questions Later!',
'description': 'md5:fe923ba34050eae468bffae10831cb22',
'duration': 2579,
'series': 'Love at First Kiss',
'season_number': 1,
'episode_number': 1,
'age_limit': 14,
},
}
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
container = extract_attributes(
self._search_regex(
r'(<div[^>]+class=["\']video-player-container[^>]+>)',
webpage, 'video container'))
video = self._parse_json(
unescapeHTML(container.get('data-video') or container.get('data-json')),
display_id)
title = video['name']
stream = video['stream']
STREAM_URL_SUFFIX = 'streamUrl'
formats = []
for stream_kind in ('', 'hds'):
suffix = STREAM_URL_SUFFIX.capitalize() if stream_kind else STREAM_URL_SUFFIX
stream_url = stream.get('%s%s' % (stream_kind, suffix))
if not stream_url:
continue
if stream_kind == '':
formats.extend(self._extract_m3u8_formats(
stream_url, display_id, 'mp4', entry_protocol='m3u8_native',
m3u8_id='hls', fatal=False))
elif stream_kind == 'hds':
formats.extend(self._extract_f4m_formats(
stream_url, display_id, f4m_id=stream_kind, fatal=False))
self._sort_formats(formats)
video_id = video.get('id') or display_id
description = video.get('description', {}).get('detailed')
duration = int_or_none(video.get('duration'))
series = video.get('show', {}).get('name')
season_number = int_or_none(video.get('season', {}).get('number'))
episode_number = int_or_none(video.get('episodeNumber'))
tags = video.get('tags')
age_limit = parse_age_limit(video.get('parental', {}).get('rating'))
subtitles = {}
captions = stream.get('captions')
if isinstance(captions, list):
for caption in captions:
subtitle_url = caption.get('fileUrl')
if (not subtitle_url or not isinstance(subtitle_url, compat_str) or
not subtitle_url.startswith('http')):
continue
lang = caption.get('fileLang', 'en')
subtitles.setdefault(lang, []).append({'url': subtitle_url})
return {
'id': video_id,
'display_id': display_id,
'title': title,
'description': description,
'duration': duration,
'series': series,
'season_number': season_number,
'episode_number': episode_number,
'tags': tags,
'age_limit': age_limit,
'formats': formats,
'subtitles': subtitles,
}
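
Aside: the subtitle handling above reduces to a defensive dict build — skip captions without an absolute fileUrl and group the rest by language. The same logic on sample data:

subtitles = {}
captions = [
    {'fileUrl': 'http://example.com/en.vtt', 'fileLang': 'en'},  # sample data
    {'fileUrl': '/relative.vtt'},                                # skipped
]
for caption in captions:
    subtitle_url = caption.get('fileUrl')
    if not subtitle_url or not subtitle_url.startswith('http'):
        continue
    subtitles.setdefault(caption.get('fileLang', 'en'), []).append(
        {'url': subtitle_url})
print(subtitles)  # {'en': [{'url': 'http://example.com/en.vtt'}]}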

View File

@@ -3,7 +3,10 @@ from __future__ import unicode_literals
 import re

 from .common import InfoExtractor
-from ..utils import str_to_int
+from ..utils import (
+    NO_DEFAULT,
+    str_to_int,
+)


 class DrTuberIE(InfoExtractor):
@@ -17,7 +20,6 @@ class DrTuberIE(InfoExtractor):
             'ext': 'mp4',
             'title': 'hot perky blonde naked golf',
             'like_count': int,
-            'dislike_count': int,
             'comment_count': int,
             'categories': ['Babe', 'Blonde', 'Erotic', 'Outdoor', 'Softcore', 'Solo'],
             'thumbnail': 're:https?://.*\.jpg$',
@@ -36,25 +38,29 @@ class DrTuberIE(InfoExtractor):
r'<source src="([^"]+)"', webpage, 'video URL') r'<source src="([^"]+)"', webpage, 'video URL')
title = self._html_search_regex( title = self._html_search_regex(
[r'<p[^>]+class="title_substrate">([^<]+)</p>', r'<title>([^<]+) - \d+'], (r'class="title_watch"[^>]*><p>([^<]+)<',
r'<p[^>]+class="title_substrate">([^<]+)</p>',
r'<title>([^<]+) - \d+'),
webpage, 'title') webpage, 'title')
thumbnail = self._html_search_regex( thumbnail = self._html_search_regex(
r'poster="([^"]+)"', r'poster="([^"]+)"',
webpage, 'thumbnail', fatal=False) webpage, 'thumbnail', fatal=False)
def extract_count(id_, name): def extract_count(id_, name, default=NO_DEFAULT):
return str_to_int(self._html_search_regex( return str_to_int(self._html_search_regex(
r'<span[^>]+(?:class|id)="%s"[^>]*>([\d,\.]+)</span>' % id_, r'<span[^>]+(?:class|id)="%s"[^>]*>([\d,\.]+)</span>' % id_,
webpage, '%s count' % name, fatal=False)) webpage, '%s count' % name, default=default, fatal=False))
like_count = extract_count('rate_likes', 'like') like_count = extract_count('rate_likes', 'like')
dislike_count = extract_count('rate_dislikes', 'dislike') dislike_count = extract_count('rate_dislikes', 'dislike', default=None)
comment_count = extract_count('comments_count', 'comment') comment_count = extract_count('comments_count', 'comment')
cats_str = self._search_regex( cats_str = self._search_regex(
r'<div[^>]+class="categories_list">(.+?)</div>', webpage, 'categories', fatal=False) r'<div[^>]+class="categories_list">(.+?)</div>',
categories = [] if not cats_str else re.findall(r'<a title="([^"]+)"', cats_str) webpage, 'categories', fatal=False)
categories = [] if not cats_str else re.findall(
r'<a title="([^"]+)"', cats_str)
return { return {
'id': video_id, 'id': video_id,
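
The default=NO_DEFAULT change above is the standard sentinel-object trick: since None is a value a caller might legitimately pass, a dedicated unique object is needed to detect "no default supplied" and preserve the stricter behaviour for the other counters. A minimal standalone sketch of the pattern (illustrative only, not youtube-dl code; the counts mapping is a hypothetical stand-in for the scraped webpage):

    NO_DEFAULT = object()

    def extract_count(counts, name, default=NO_DEFAULT):
        value = counts.get(name)
        if value is None:
            if default is NO_DEFAULT:
                # no fallback supplied: keep the old strict behaviour
                raise KeyError('unable to extract %s count' % name)
            return default
        return int(value.replace(',', ''))

    counts = {'like': '1,234'}
    print(extract_count(counts, 'like'))                   # 1234
    print(extract_count(counts, 'dislike', default=None))  # None instead of an error

Only the dislike counter, which the site no longer renders, opts into default=None.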

View File: youtube_dl/extractor/engadget.py

@@ -4,9 +4,10 @@ from .common import InfoExtractor
 
 
 class EngadgetIE(InfoExtractor):
-    _VALID_URL = r'https?://www.engadget.com/video/(?P<id>\d+)'
-    _TEST = {
+    _VALID_URL = r'https?://www.engadget.com/video/(?P<id>[^/?#]+)'
+    _TESTS = [{
+        # video with 5min ID
         'url': 'http://www.engadget.com/video/518153925/',
         'md5': 'c6820d4828a5064447a4d9fc73f312c9',
         'info_dict': {
@@ -15,8 +16,12 @@ class EngadgetIE(InfoExtractor):
             'title': 'Samsung Galaxy Tab Pro 8.4 Review',
         },
         'add_ie': ['FiveMin'],
-    }
+    }, {
+        # video with vidible ID
+        'url': 'https://www.engadget.com/video/57a28462134aa15a39f0421a/',
+        'only_matching': True,
+    }]
 
     def _real_extract(self, url):
         video_id = self._match_id(url)
-        return self.url_result('5min:%s' % video_id)
+        return self.url_result('aol-video:%s' % video_id)

View File: youtube_dl/extractor/expotv.py

@@ -1,7 +1,5 @@
 from __future__ import unicode_literals
 
-import re
-
 from .common import InfoExtractor
 from ..utils import (
     int_or_none,
@@ -12,23 +10,22 @@ from ..utils import (
 class ExpoTVIE(InfoExtractor):
     _VALID_URL = r'https?://www\.expotv\.com/videos/[^?#]*/(?P<id>[0-9]+)($|[?#])'
     _TEST = {
-        'url': 'http://www.expotv.com/videos/reviews/1/24/LinneCardscom/17561',
-        'md5': '2985e6d7a392b2f7a05e0ca350fe41d0',
+        'url': 'http://www.expotv.com/videos/reviews/3/40/NYX-Butter-lipstick/667916',
+        'md5': 'fe1d728c3a813ff78f595bc8b7a707a8',
         'info_dict': {
-            'id': '17561',
+            'id': '667916',
             'ext': 'mp4',
-            'upload_date': '20060212',
-            'title': 'My Favorite Online Scrapbook Store',
-            'view_count': int,
-            'description': 'You\'ll find most everything you need at this virtual store front.',
-            'uploader': 'Anna T.',
+            'title': 'NYX Butter Lipstick Little Susie',
+            'description': 'Goes on like butter, but looks better!',
             'thumbnail': 're:^https?://.*\.jpg$',
+            'uploader': 'Stephanie S.',
+            'upload_date': '20150520',
+            'view_count': int,
         }
     }
 
     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)
 
         webpage = self._download_webpage(url, video_id)
         player_key = self._search_regex(
@@ -66,7 +63,7 @@ class ExpoTVIE(InfoExtractor):
             fatal=False)
         upload_date = unified_strdate(self._search_regex(
             r'<h5>Reviewed on ([0-9/.]+)</h5>', webpage, 'upload date',
-            fatal=False))
+            fatal=False), day_first=False)
 
         return {
             'id': video_id,

View File: youtube_dl/extractor/extractors.py

@@ -29,6 +29,7 @@ from .aftonbladet import AftonbladetIE
 from .airmozilla import AirMozillaIE
 from .aljazeera import AlJazeeraIE
 from .alphaporno import AlphaPornoIE
+from .amcnetworks import AMCNetworksIE
 from .animeondemand import AnimeOnDemandIE
 from .anitube import AnitubeIE
 from .anysex import AnySexIE
@@ -159,8 +160,9 @@ from .coub import CoubIE
 from .collegerama import CollegeRamaIE
 from .comedycentral import (
     ComedyCentralIE,
-    ComedyCentralShowsIE,
+    ComedyCentralShortnameIE,
     ComedyCentralTVIE,
+    ToshIE,
 )
 from .comcarcoff import ComCarCoffIE
 from .commonmistakes import CommonMistakesIE, UnicodeBOMIE
@@ -220,6 +222,7 @@ from .dvtv import DVTVIE
 from .dumpert import DumpertIE
 from .defense import DefenseGouvFrIE
 from .discovery import DiscoveryIE
+from .discoverygo import DiscoveryGoIE
 from .dispeak import DigitallySpeakingIE
 from .dropbox import DropboxIE
 from .dw import (
@@ -270,10 +273,7 @@ from .fox import FOXIE
 from .foxgay import FoxgayIE
 from .foxnews import FoxNewsIE
 from .foxsports import FoxSportsIE
-from .franceculture import (
-    FranceCultureIE,
-    FranceCultureEmissionIE,
-)
+from .franceculture import FranceCultureIE
 from .franceinter import FranceInterIE
 from .francetv import (
     PluzzIE,
@@ -288,8 +288,8 @@ from .freevideo import FreeVideoIE
 from .funimation import FunimationIE
 from .funnyordie import FunnyOrDieIE
 from .fusion import FusionIE
+from .fxnetworks import FXNetworksIE
 from .gameinformer import GameInformerIE
-from .gamekings import GamekingsIE
 from .gameone import (
     GameOneIE,
     GameOnePlaylistIE,
@@ -310,7 +310,6 @@ from .globo import (
 )
 from .godtube import GodTubeIE
 from .godtv import GodTVIE
-from .goldenmoustache import GoldenMoustacheIE
 from .golem import GolemIE
 from .googledrive import GoogleDriveIE
 from .googleplus import GooglePlusIE
@@ -325,6 +324,10 @@ from .heise import HeiseIE
 from .hellporno import HellPornoIE
 from .helsinki import HelsinkiIE
 from .hentaistigma import HentaiStigmaIE
+from .hgtv import (
+    HGTVIE,
+    HGTVComShowIE,
+)
 from .historicfilms import HistoricFilmsIE
 from .hitbox import HitboxIE, HitboxLiveIE
 from .hornbunny import HornBunnyIE
@@ -480,7 +483,6 @@ from .msn import MSNIE
 from .mtv import (
     MTVIE,
     MTVServicesEmbeddedIE,
-    MTVIggyIE,
     MTVDEIE,
 )
 from .muenchentv import MuenchenTVIE
@@ -492,8 +494,9 @@ from .myvi import MyviIE
 from .myvideo import MyVideoIE
 from .myvidster import MyVidsterIE
 from .nationalgeographic import (
+    NationalGeographicVideoIE,
     NationalGeographicIE,
-    NationalGeographicChannelIE,
+    NationalGeographicEpisodeGuideIE,
 )
 from .naver import NaverIE
 from .nba import NBAIE
@@ -530,7 +533,6 @@ from .nextmedia import (
     NextMediaActionNewsIE,
     AppleDailyIE,
 )
-from .nextmovie import NextMovieIE
 from .nfb import NFBIE
 from .nfl import NFLIE
 from .nhl import (
@@ -637,8 +639,10 @@ from .pluralsight import (
     PluralsightCourseIE,
 )
 from .podomatic import PodomaticIE
+from .pokemon import PokemonIE
 from .polskieradio import PolskieRadioIE
 from .porn91 import Porn91IE
+from .porncom import PornComIE
 from .pornhd import PornHdIE
 from .pornhub import (
     PornHubIE,
@@ -695,6 +699,7 @@ from .rockstargames import RockstarGamesIE
 from .roosterteeth import RoosterTeethIE
 from .rottentomatoes import RottenTomatoesIE
 from .roxwel import RoxwelIE
+from .rozhlas import RozhlasIE
 from .rtbf import RTBFIE
 from .rte import RteIE, RteRadioIE
 from .rtlnl import RtlNlIE
@@ -754,6 +759,7 @@ from .smotri import (
 )
 from .snotr import SnotrIE
 from .sohu import SohuIE
+from .sonyliv import SonyLIVIE
 from .soundcloud import (
     SoundcloudIE,
     SoundcloudSetIE,
@@ -809,7 +815,6 @@ from .tagesschau import (
     TagesschauPlayerIE,
     TagesschauIE,
 )
-from .tapely import TapelyIE
 from .tass import TassIE
 from .tdslifeway import TDSLifewayIE
 from .teachertube import (
@@ -893,10 +898,14 @@ from .tvc import (
 from .tvigle import TvigleIE
 from .tvland import TVLandIE
 from .tvp import (
+    TVPEmbedIE,
     TVPIE,
     TVPSeriesIE,
 )
-from .tvplay import TVPlayIE
+from .tvplay import (
+    TVPlayIE,
+    ViafreeIE,
+)
 from .tweakers import TweakersIE
 from .twentyfourvideo import TwentyFourVideoIE
 from .twentymin import TwentyMinutenIE
@@ -925,6 +934,11 @@ from .udemy import (
 from .udn import UDNEmbedIE
 from .digiteka import DigitekaIE
 from .unistra import UnistraIE
+from .uol import UOLIE
+from .uplynk import (
+    UplynkIE,
+    UplynkPreplayIE,
+)
 from .urort import UrortIE
 from .urplay import URPlayIE
 from .usatoday import USATodayIE
@@ -953,6 +967,7 @@ from .vice import (
     ViceIE,
     ViceShowIE,
 )
+from .viceland import VicelandIE
 from .vidbit import VidbitIE
 from .viddler import ViddlerIE
 from .videodetective import VideoDetectiveIE
@@ -1006,6 +1021,7 @@ from .vk import (
 )
 from .vlive import VLiveIE
 from .vodlocker import VodlockerIE
+from .vodplatform import VODPlatformIE
 from .voicerepublic import VoiceRepublicIE
 from .voxmedia import VoxMediaIE
 from .vporn import VpornIE
@@ -1102,4 +1118,3 @@ from .zingmp3 import (
     ZingMp3SongIE,
     ZingMp3AlbumIE,
 )
-from .zippcast import ZippCastIE

View File: youtube_dl/extractor/extremetube.py

@@ -1,20 +1,14 @@
 from __future__ import unicode_literals
 
-import re
-
-from .common import InfoExtractor
-from ..utils import (
-    int_or_none,
-    sanitized_Request,
-    str_to_int,
-)
+from ..utils import str_to_int
+from .keezmovies import KeezMoviesIE
 
 
-class ExtremeTubeIE(InfoExtractor):
+class ExtremeTubeIE(KeezMoviesIE):
     _VALID_URL = r'https?://(?:www\.)?extremetube\.com/(?:[^/]+/)?video/(?P<id>[^/#?&]+)'
     _TESTS = [{
         'url': 'http://www.extremetube.com/video/music-video-14-british-euro-brit-european-cumshots-swallow-652431',
-        'md5': '344d0c6d50e2f16b06e49ca011d8ac69',
+        'md5': '1fb9228f5e3332ec8c057d6ac36f33e0',
         'info_dict': {
             'id': 'music-video-14-british-euro-brit-european-cumshots-swallow-652431',
             'ext': 'mp4',
@@ -35,58 +29,22 @@ class ExtremeTubeIE(InfoExtractor):
     }]
 
     def _real_extract(self, url):
-        video_id = self._match_id(url)
-
-        req = sanitized_Request(url)
-        req.add_header('Cookie', 'age_verified=1')
-        webpage = self._download_webpage(req, video_id)
+        webpage, info = self._extract_info(url)
 
-        video_title = self._html_search_regex(
-            r'<h1 [^>]*?title="([^"]+)"[^>]*>', webpage, 'title')
+        if not info['title']:
+            info['title'] = self._search_regex(
+                r'<h1[^>]+title="([^"]+)"[^>]*>', webpage, 'title')
+
         uploader = self._html_search_regex(
             r'Uploaded by:\s*</strong>\s*(.+?)\s*</div>',
             webpage, 'uploader', fatal=False)
-        view_count = str_to_int(self._html_search_regex(
+        view_count = str_to_int(self._search_regex(
             r'Views:\s*</strong>\s*<span>([\d,\.]+)</span>',
             webpage, 'view count', fatal=False))
 
-        flash_vars = self._parse_json(
-            self._search_regex(
-                r'var\s+flashvars\s*=\s*({.+?});', webpage, 'flash vars'),
-            video_id)
-
-        formats = []
-        for quality_key, video_url in flash_vars.items():
-            height = int_or_none(self._search_regex(
-                r'quality_(\d+)[pP]$', quality_key, 'height', default=None))
-            if not height:
-                continue
-            f = {
-                'url': video_url,
-            }
-            mobj = re.search(
-                r'/(?P<height>\d{3,4})[pP]_(?P<bitrate>\d+)[kK]_\d+', video_url)
-            if mobj:
-                height = int(mobj.group('height'))
-                bitrate = int(mobj.group('bitrate'))
-                f.update({
-                    'format_id': '%dp-%dk' % (height, bitrate),
-                    'height': height,
-                    'tbr': bitrate,
-                })
-            else:
-                f.update({
-                    'format_id': '%dp' % height,
-                    'height': height,
-                })
-            formats.append(f)
-        self._sort_formats(formats)
-
-        return {
-            'id': video_id,
-            'title': video_title,
-            'formats': formats,
+        info.update({
             'uploader': uploader,
             'view_count': view_count,
-            'age_limit': 18,
-        }
+        })
+
+        return info
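
This rewrite folds ExtremeTube into the KeezMovies extractor (rewritten later in this compare): the base class does the shared flashvars and format work in _extract_info(), and the subclass only patches in site-specific fields afterwards. A toy sketch of the shape of that pattern (hypothetical classes, not the real extractors):

    import re

    class BaseSiteIE(object):  # stands in for KeezMoviesIE
        def _extract_info(self, url):
            webpage = '<h1 title="Example title"></h1>'  # stub for the real download
            return webpage, {'id': '652431', 'title': None, 'age_limit': 18}

    class SisterSiteIE(BaseSiteIE):  # stands in for ExtremeTubeIE
        def _real_extract(self, url):
            webpage, info = self._extract_info(url)
            if not info['title']:  # base class could not find a title
                info['title'] = re.search(r'title="([^"]+)"', webpage).group(1)
            return info

    print(SisterSiteIE()._real_extract('http://example.com/video/652431'))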

View File: youtube_dl/extractor/fivemin.py

@@ -1,24 +1,11 @@
 from __future__ import unicode_literals
 
-import re
-
 from .common import InfoExtractor
-from ..compat import (
-    compat_parse_qs,
-    compat_urllib_parse_urlencode,
-    compat_urllib_parse_urlparse,
-    compat_urlparse,
-)
-from ..utils import (
-    ExtractorError,
-    parse_duration,
-    replace_extension,
-)
 
 
 class FiveMinIE(InfoExtractor):
     IE_NAME = '5min'
-    _VALID_URL = r'(?:5min:(?P<id>\d+)(?::(?P<sid>\d+))?|https?://[^/]*?5min\.com/Scripts/PlayerSeed\.js\?(?P<query>.*))'
+    _VALID_URL = r'(?:5min:|https?://(?:[^/]*?5min\.com/|delivery\.vidible\.tv/aol)(?:(?:Scripts/PlayerSeed\.js|playerseed/?)?\?.*?playList=)?)(?P<id>\d+)'
 
     _TESTS = [
         {
@@ -29,8 +16,16 @@ class FiveMinIE(InfoExtractor):
                 'id': '518013791',
                 'ext': 'mp4',
                 'title': 'iPad Mini with Retina Display Review',
+                'description': 'iPad mini with Retina Display review',
                 'duration': 177,
+                'uploader': 'engadget',
+                'upload_date': '20131115',
+                'timestamp': 1384515288,
             },
+            'params': {
+                # m3u8 download
+                'skip_download': True,
+            }
         },
         {
             # From http://on.aol.com/video/how-to-make-a-next-level-fruit-salad-518086247
@@ -44,108 +39,16 @@ class FiveMinIE(InfoExtractor):
             },
             'skip': 'no longer available',
         },
+        {
+            'url': 'http://embed.5min.com/518726732/',
+            'only_matching': True,
+        },
+        {
+            'url': 'http://delivery.vidible.tv/aol?playList=518013791',
+            'only_matching': True,
+        }
     ]
-    _ERRORS = {
-        'ErrorVideoNotExist': 'We\'re sorry, but the video you are trying to watch does not exist.',
-        'ErrorVideoNoLongerAvailable': 'We\'re sorry, but the video you are trying to watch is no longer available.',
-        'ErrorVideoRejected': 'We\'re sorry, but the video you are trying to watch has been removed.',
-        'ErrorVideoUserNotGeo': 'We\'re sorry, but the video you are trying to watch cannot be viewed from your current location.',
-        'ErrorVideoLibraryRestriction': 'We\'re sorry, but the video you are trying to watch is currently unavailable for viewing at this domain.',
-        'ErrorExposurePermission': 'We\'re sorry, but the video you are trying to watch is currently unavailable for viewing at this domain.',
-    }
-    _QUALITIES = {
-        1: {
-            'width': 640,
-            'height': 360,
-        },
-        2: {
-            'width': 854,
-            'height': 480,
-        },
-        4: {
-            'width': 1280,
-            'height': 720,
-        },
-        8: {
-            'width': 1920,
-            'height': 1080,
-        },
-        16: {
-            'width': 640,
-            'height': 360,
-        },
-        32: {
-            'width': 854,
-            'height': 480,
-        },
-        64: {
-            'width': 1280,
-            'height': 720,
-        },
-        128: {
-            'width': 640,
-            'height': 360,
-        },
-    }
 
     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
-        sid = mobj.group('sid')
-
-        if mobj.group('query'):
-            qs = compat_parse_qs(mobj.group('query'))
-            if not qs.get('playList'):
-                raise ExtractorError('Invalid URL', expected=True)
-            video_id = qs['playList'][0]
-            if qs.get('sid'):
-                sid = qs['sid'][0]
-
-        embed_url = 'https://embed.5min.com/playerseed/?playList=%s' % video_id
-        if not sid:
-            embed_page = self._download_webpage(embed_url, video_id,
-                                                'Downloading embed page')
-            sid = self._search_regex(r'sid=(\d+)', embed_page, 'sid')
-
-        response = self._download_json(
-            'https://syn.5min.com/handlers/SenseHandler.ashx?' +
-            compat_urllib_parse_urlencode({
-                'func': 'GetResults',
-                'playlist': video_id,
-                'sid': sid,
-                'isPlayerSeed': 'true',
-                'url': embed_url,
-            }),
-            video_id)
-        if not response['success']:
-            raise ExtractorError(
-                '%s said: %s' % (
-                    self.IE_NAME,
-                    self._ERRORS.get(response['errorMessage'], response['errorMessage'])),
-                expected=True)
-        info = response['binding'][0]
-
-        formats = []
-        parsed_video_url = compat_urllib_parse_urlparse(compat_parse_qs(
-            compat_urllib_parse_urlparse(info['EmbededURL']).query)['videoUrl'][0])
-        for rendition in info['Renditions']:
-            if rendition['RenditionType'] == 'aac' or rendition['RenditionType'] == 'm3u8':
-                continue
-            else:
-                rendition_url = compat_urlparse.urlunparse(parsed_video_url._replace(path=replace_extension(parsed_video_url.path.replace('//', '/%s/' % rendition['ID']), rendition['RenditionType'])))
-                quality = self._QUALITIES.get(rendition['ID'], {})
-                formats.append({
-                    'format_id': '%s-%d' % (rendition['RenditionType'], rendition['ID']),
-                    'url': rendition_url,
-                    'width': quality.get('width'),
-                    'height': quality.get('height'),
-                })
-        self._sort_formats(formats)
-
-        return {
-            'id': video_id,
-            'title': info['Title'],
-            'thumbnail': info.get('ThumbURL'),
-            'duration': parse_duration(info.get('Duration')),
-            'formats': formats,
-        }
+        video_id = self._match_id(url)
+        return self.url_result('aol-video:%s' % video_id)
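
Nearly the entire extractor body is deleted because FiveMinIE now just delegates: _real_extract() returns a url_result pointing at the AOL extractor, and youtube-dl resolves that reference recursively. A minimal stand-in for what url_result builds (illustrative; the real helper lives in InfoExtractor):

    def url_result(url, ie=None):
        # the dict youtube-dl re-dispatches on when '_type' is 'url'
        return {'_type': 'url', 'url': url, 'ie_key': ie}

    video_id = '518013791'
    print(url_result('aol-video:%s' % video_id))
    # {'_type': 'url', 'url': 'aol-video:518013791', 'ie_key': None}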

View File: youtube_dl/extractor/flipagram.py

@@ -48,7 +48,7 @@ class FlipagramIE(InfoExtractor):
         flipagram = video_data['flipagram']
         video = flipagram['video']
 
-        json_ld = self._search_json_ld(webpage, video_id, default=False)
+        json_ld = self._search_json_ld(webpage, video_id, default={})
         title = json_ld.get('title') or flipagram['captionText']
         description = json_ld.get('description') or flipagram.get('captionText')

View File: youtube_dl/extractor/formula1.py

@@ -5,8 +5,8 @@ from .common import InfoExtractor
 
 
 class Formula1IE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?formula1\.com/content/fom-website/en/video/\d{4}/\d{1,2}/(?P<id>.+?)\.html'
-    _TEST = {
+    _VALID_URL = r'https?://(?:www\.)?formula1\.com/(?:content/fom-website/)?en/video/\d{4}/\d{1,2}/(?P<id>.+?)\.html'
+    _TESTS = [{
         'url': 'http://www.formula1.com/content/fom-website/en/video/2016/5/Race_highlights_-_Spain_2016.html',
         'md5': '8c79e54be72078b26b89e0e111c0502b',
         'info_dict': {
@@ -15,7 +15,10 @@ class Formula1IE(InfoExtractor):
             'title': 'Race highlights - Spain 2016',
         },
         'add_ie': ['Ooyala'],
-    }
+    }, {
+        'url': 'http://www.formula1.com/en/video/2016/5/Race_highlights_-_Spain_2016.html',
+        'only_matching': True,
+    }]
 
     def _real_extract(self, url):
         display_id = self._match_id(url)

View File: youtube_dl/extractor/fourtube.py

@@ -43,14 +43,14 @@ class FourTubeIE(InfoExtractor):
             'uploadDate', webpage))
         thumbnail = self._html_search_meta('thumbnailUrl', webpage)
         uploader_id = self._html_search_regex(
-            r'<a class="img-avatar" href="[^"]+/channels/([^/"]+)" title="Go to [^"]+ page">',
+            r'<a class="item-to-subscribe" href="[^"]+/channels/([^/"]+)" title="Go to [^"]+ page">',
             webpage, 'uploader id', fatal=False)
         uploader = self._html_search_regex(
-            r'<a class="img-avatar" href="[^"]+/channels/[^/"]+" title="Go to ([^"]+) page">',
+            r'<a class="item-to-subscribe" href="[^"]+/channels/[^/"]+" title="Go to ([^"]+) page">',
             webpage, 'uploader', fatal=False)
 
         categories_html = self._search_regex(
-            r'(?s)><i class="icon icon-tag"></i>\s*Categories / Tags\s*.*?<ul class="list">(.*?)</ul>',
+            r'(?s)><i class="icon icon-tag"></i>\s*Categories / Tags\s*.*?<ul class="[^"]*?list[^"]*?">(.*?)</ul>',
             webpage, 'categories', fatal=False)
         categories = None
         if categories_html:
@@ -59,10 +59,10 @@ class FourTubeIE(InfoExtractor):
                 r'(?s)<li><a.*?>(.*?)</a>', categories_html)]
 
         view_count = str_to_int(self._search_regex(
-            r'<meta itemprop="interactionCount" content="UserPlays:([0-9,]+)">',
+            r'<meta[^>]+itemprop="interactionCount"[^>]+content="UserPlays:([0-9,]+)">',
             webpage, 'view count', fatal=False))
         like_count = str_to_int(self._search_regex(
-            r'<meta itemprop="interactionCount" content="UserLikes:([0-9,]+)">',
+            r'<meta[^>]+itemprop="interactionCount"[^>]+content="UserLikes:([0-9,]+)">',
             webpage, 'like count', fatal=False))
         duration = parse_duration(self._html_search_meta('duration', webpage))

View File: youtube_dl/extractor/fox.py

@@ -2,7 +2,10 @@
 from __future__ import unicode_literals
 
 from .common import InfoExtractor
-from ..utils import smuggle_url
+from ..utils import (
+    smuggle_url,
+    update_url_query,
+)
 
 
 class FOXIE(InfoExtractor):
@@ -29,11 +32,12 @@ class FOXIE(InfoExtractor):
         release_url = self._parse_json(self._search_regex(
             r'"fox_pdk_player"\s*:\s*({[^}]+?})', webpage, 'fox_pdk_player'),
-            video_id)['release_url'] + '&switch=http'
+            video_id)['release_url']
 
         return {
             '_type': 'url_transparent',
             'ie_key': 'ThePlatform',
-            'url': smuggle_url(release_url, {'force_smil_url': True}),
+            'url': smuggle_url(update_url_query(
+                release_url, {'switch': 'http'}), {'force_smil_url': True}),
             'id': video_id,
         }
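
Replacing the string concatenation + '&switch=http' with update_url_query() is the safer way to add a parameter: it parses and re-encodes the query, so it works whether or not the release URL already carries a query string and never produces a doubled or misplaced '&'. A rough stdlib-only equivalent of the helper (a sketch, not the actual youtube_dl.utils implementation):

    try:
        from urllib.parse import urlparse, urlunparse, parse_qs, urlencode  # Python 3
    except ImportError:  # Python 2, which youtube-dl still supported here
        from urlparse import urlparse, urlunparse, parse_qs
        from urllib import urlencode

    def update_url_query(url, query):
        parsed = urlparse(url)
        qs = parse_qs(parsed.query)
        qs.update({k: [v] for k, v in query.items()})
        return urlunparse(parsed._replace(query=urlencode(qs, doseq=True)))

    print(update_url_query('http://example.com/video', {'switch': 'http'}))
    print(update_url_query('http://example.com/video?mbr=true', {'switch': 'http'}))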

View File: youtube_dl/extractor/franceculture.py

@@ -2,104 +2,56 @@
 from __future__ import unicode_literals
 
 from .common import InfoExtractor
-from ..compat import (
-    compat_urlparse,
-)
 from ..utils import (
     determine_ext,
-    int_or_none,
-    ExtractorError,
+    unified_strdate,
 )
 
 
 class FranceCultureIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?franceculture\.fr/player/reecouter\?play=(?P<id>[0-9]+)'
+    _VALID_URL = r'https?://(?:www\.)?franceculture\.fr/emissions/(?:[^/]+/)*(?P<id>[^/?#&]+)'
     _TEST = {
-        'url': 'http://www.franceculture.fr/player/reecouter?play=4795174',
+        'url': 'http://www.franceculture.fr/emissions/carnet-nomade/rendez-vous-au-pays-des-geeks',
         'info_dict': {
-            'id': '4795174',
+            'id': 'rendez-vous-au-pays-des-geeks',
+            'display_id': 'rendez-vous-au-pays-des-geeks',
             'ext': 'mp3',
             'title': 'Rendez-vous au pays des geeks',
-            'alt_title': 'Carnet nomade | 13-14',
-            'vcodec': 'none',
+            'thumbnail': 're:^https?://.*\\.jpg$',
             'upload_date': '20140301',
-            'thumbnail': r're:^http://static\.franceculture\.fr/.*/images/player/Carnet-nomade\.jpg$',
-            'description': 'startswith:Avec :Jean-Baptiste Péretié pour son documentaire sur Arte "La revanche',
-            'timestamp': 1393700400,
+            'vcodec': 'none',
         }
     }
 
-    def _extract_from_player(self, url, video_id):
-        webpage = self._download_webpage(url, video_id)
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
 
-        video_path = self._search_regex(
-            r'<a id="player".*?href="([^"]+)"', webpage, 'video path')
-        video_url = compat_urlparse.urljoin(url, video_path)
-        timestamp = int_or_none(self._search_regex(
-            r'<a id="player".*?data-date="([0-9]+)"',
+        webpage = self._download_webpage(url, display_id)
+
+        video_url = self._search_regex(
+            r'(?s)<div[^>]+class="[^"]*?title-zone-diffusion[^"]*?"[^>]*>.*?<a[^>]+href="([^"]+)"',
+            webpage, 'video path')
+
+        title = self._og_search_title(webpage)
+
+        upload_date = unified_strdate(self._search_regex(
+            '(?s)<div[^>]+class="date"[^>]*>.*?<span[^>]+class="inner"[^>]*>([^<]+)<',
             webpage, 'upload date', fatal=False))
         thumbnail = self._search_regex(
-            r'<a id="player".*?>\s+<img src="([^"]+)"',
+            r'(?s)<figure[^>]+itemtype="https://schema.org/ImageObject"[^>]*>.*?<img[^>]+data-pagespeed-(?:lazy|high-res)-src="([^"]+)"',
             webpage, 'thumbnail', fatal=False)
-
-        display_id = self._search_regex(
-            r'<span class="path-diffusion">emission-(.*?)</span>', webpage, 'display_id')
-
-        title = self._html_search_regex(
-            r'<span class="title-diffusion">(.*?)</span>', webpage, 'title')
-        alt_title = self._html_search_regex(
-            r'<span class="title">(.*?)</span>',
-            webpage, 'alt_title', fatal=False)
-        description = self._html_search_regex(
-            r'<span class="description">(.*?)</span>',
-            webpage, 'description', fatal=False)
-
         uploader = self._html_search_regex(
             r'(?s)<div id="emission".*?<span class="author">(.*?)</span>',
             webpage, 'uploader', default=None)
         vcodec = 'none' if determine_ext(video_url.lower()) == 'mp3' else None
 
         return {
-            'id': video_id,
+            'id': display_id,
+            'display_id': display_id,
             'url': video_url,
+            'title': title,
+            'thumbnail': thumbnail,
             'vcodec': vcodec,
             'uploader': uploader,
-            'timestamp': timestamp,
-            'title': title,
-            'alt_title': alt_title,
-            'thumbnail': thumbnail,
-            'description': description,
-            'display_id': display_id,
+            'upload_date': upload_date,
         }
-
-    def _real_extract(self, url):
-        video_id = self._match_id(url)
-        return self._extract_from_player(url, video_id)
-
-
-class FranceCultureEmissionIE(FranceCultureIE):
-    _VALID_URL = r'https?://(?:www\.)?franceculture\.fr/emission-(?P<id>[^?#]+)'
-    _TEST = {
-        'url': 'http://www.franceculture.fr/emission-les-carnets-de-la-creation-jean-gabriel-periot-cineaste-2015-10-13',
-        'info_dict': {
-            'title': 'Jean-Gabriel Périot, cinéaste',
-            'alt_title': 'Les Carnets de la création',
-            'id': '5093239',
-            'display_id': 'les-carnets-de-la-creation-jean-gabriel-periot-cineaste-2015-10-13',
-            'ext': 'mp3',
-            'timestamp': 1444762500,
-            'upload_date': '20151013',
-            'description': 'startswith:Aujourd\'hui dans "Les carnets de la création", le cinéaste',
-        },
-    }
-
-    def _real_extract(self, url):
-        video_id = self._match_id(url)
-        webpage = self._download_webpage(url, video_id)
-        video_path = self._html_search_regex(
-            r'<a class="rf-player-open".*?href="([^"]+)"', webpage, 'video path', 'no_path_player')
-        if video_path == 'no_path_player':
-            raise ExtractorError('no player : no sound in this page.', expected=True)
-        new_id = self._search_regex('play=(?P<id>[0-9]+)', video_path, 'new_id', group='id')
-        video_url = compat_urlparse.urljoin(url, video_path)
-        return self._extract_from_player(video_url, new_id)

View File: youtube_dl/extractor/francetv.py

@@ -131,7 +131,7 @@ class PluzzIE(FranceTVBaseInfoExtractor):
 
 class FranceTvInfoIE(FranceTVBaseInfoExtractor):
     IE_NAME = 'francetvinfo.fr'
-    _VALID_URL = r'https?://(?:www|mobile|france3-regions)\.francetvinfo\.fr/.*/(?P<title>.+)\.html'
+    _VALID_URL = r'https?://(?:www|mobile|france3-regions)\.francetvinfo\.fr/(?:[^/]+/)*(?P<title>[^/?#&.]+)'
 
     _TESTS = [{
         'url': 'http://www.francetvinfo.fr/replay-jt/france-3/soir-3/jt-grand-soir-3-lundi-26-aout-2013_393427.html',
@@ -206,6 +206,9 @@ class FranceTvInfoIE(FranceTVBaseInfoExtractor):
             'uploader_id': 'x2q2ez',
         },
         'add_ie': ['Dailymotion'],
+    }, {
+        'url': 'http://france3-regions.francetvinfo.fr/limousin/emissions/jt-1213-limousin',
+        'only_matching': True,
     }]
 
     def _real_extract(self, url):

View File: youtube_dl/extractor/fxnetworks.py

@@ -0,0 +1,70 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .adobepass import AdobePassIE
+from ..utils import (
+    update_url_query,
+    extract_attributes,
+    parse_age_limit,
+    smuggle_url,
+)
+
+
+class FXNetworksIE(AdobePassIE):
+    _VALID_URL = r'https?://(?:www\.)?(?:fxnetworks|simpsonsworld)\.com/video/(?P<id>\d+)'
+    _TESTS = [{
+        'url': 'http://www.fxnetworks.com/video/719841347694',
+        'md5': '1447d4722e42ebca19e5232ab93abb22',
+        'info_dict': {
+            'id': '719841347694',
+            'ext': 'mp4',
+            'title': 'Vanpage',
+            'description': 'F*ck settling down. You\'re the Worst returns for an all new season August 31st on FXX.',
+            'age_limit': 14,
+            'uploader': 'NEWA-FNG-FX',
+            'upload_date': '20160706',
+            'timestamp': 1467844741,
+        },
+        'add_ie': ['ThePlatform'],
+    }, {
+        'url': 'http://www.simpsonsworld.com/video/716094019682',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+        if 'The content you are trying to access is not available in your region.' in webpage:
+            self.raise_geo_restricted()
+        video_data = extract_attributes(self._search_regex(
+            r'(<a.+?rel="http://link\.theplatform\.com/s/.+?</a>)', webpage, 'video data'))
+        player_type = self._search_regex(r'playerType\s*=\s*[\'"]([^\'"]+)', webpage, 'player type', default=None)
+        release_url = video_data['rel']
+        title = video_data['data-title']
+        rating = video_data.get('data-rating')
+        query = {
+            'mbr': 'true',
+        }
+        if player_type == 'movies':
+            query.update({
+                'manifest': 'm3u',
+            })
+        else:
+            query.update({
+                'switch': 'http',
+            })
+        if video_data.get('data-req-auth') == '1':
+            resource = self._get_mvpd_resource(
+                video_data['data-channel'], title,
+                video_data.get('data-guid'), rating)
+            query['auth'] = self._extract_mvpd_auth(url, video_id, 'fx', resource)
+
+        return {
+            '_type': 'url_transparent',
+            'id': video_id,
+            'title': title,
+            'url': smuggle_url(update_url_query(release_url, query), {'force_smil_url': True}),
+            'thumbnail': video_data.get('data-large-thumb'),
+            'age_limit': parse_age_limit(rating),
+            'ie_key': 'ThePlatform',
+        }

View File: youtube_dl/extractor/gamekings.py

@@ -1,76 +0,0 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
-from .common import InfoExtractor
-from ..utils import (
-    xpath_text,
-    xpath_with_ns,
-)
-from .youtube import YoutubeIE
-
-
-class GamekingsIE(InfoExtractor):
-    _VALID_URL = r'https?://www\.gamekings\.nl/(?:videos|nieuws)/(?P<id>[^/]+)'
-    _TESTS = [{
-        # YouTube embed video
-        'url': 'http://www.gamekings.nl/videos/phoenix-wright-ace-attorney-dual-destinies-review/',
-        'md5': '5208d3a17adeaef829a7861887cb9029',
-        'info_dict': {
-            'id': 'HkSQKetlGOU',
-            'ext': 'mp4',
-            'title': 'Phoenix Wright: Ace Attorney - Dual Destinies Review',
-            'description': 'md5:db88c0e7f47e9ea50df3271b9dc72e1d',
-            'thumbnail': 're:^https?://.*\.jpg$',
-            'uploader_id': 'UCJugRGo4STYMeFr5RoOShtQ',
-            'uploader': 'Gamekings Vault',
-            'upload_date': '20151123',
-        },
-        'add_ie': ['Youtube'],
-    }, {
-        # vimeo video
-        'url': 'http://www.gamekings.nl/videos/the-legend-of-zelda-majoras-mask/',
-        'md5': '12bf04dfd238e70058046937657ea68d',
-        'info_dict': {
-            'id': 'the-legend-of-zelda-majoras-mask',
-            'ext': 'mp4',
-            'title': 'The Legend of Zelda: Majoras Mask',
-            'description': 'md5:9917825fe0e9f4057601fe1e38860de3',
-            'thumbnail': 're:^https?://.*\.jpg$',
-        },
-    }, {
-        'url': 'http://www.gamekings.nl/nieuws/gamekings-extra-shelly-en-david-bereiden-zich-voor-op-de-livestream/',
-        'only_matching': True,
-    }]
-
-    def _real_extract(self, url):
-        video_id = self._match_id(url)
-
-        webpage = self._download_webpage(url, video_id)
-
-        playlist_id = self._search_regex(
-            r'gogoVideo\([^,]+,\s*"([^"]+)', webpage, 'playlist id')
-
-        # Check if a YouTube embed is used
-        if YoutubeIE.suitable(playlist_id):
-            return self.url_result(playlist_id, ie='Youtube')
-
-        playlist = self._download_xml(
-            'http://www.gamekings.tv/wp-content/themes/gk2010/rss_playlist.php?id=%s' % playlist_id,
-            video_id)
-
-        NS_MAP = {
-            'jwplayer': 'http://rss.jwpcdn.com/'
-        }
-
-        item = playlist.find('./channel/item')
-
-        thumbnail = xpath_text(item, xpath_with_ns('./jwplayer:image', NS_MAP), 'thumbnail')
-        video_url = item.find(xpath_with_ns('./jwplayer:source', NS_MAP)).get('file')
-
-        return {
-            'id': video_id,
-            'url': video_url,
-            'title': self._og_search_title(webpage),
-            'description': self._og_search_description(webpage),
-            'thumbnail': thumbnail,
-        }

View File: youtube_dl/extractor/generic.py

@@ -71,6 +71,9 @@ from .vessel import VesselIE
 from .kaltura import KalturaIE
 from .eagleplatform import EaglePlatformIE
 from .facebook import FacebookIE
+from .soundcloud import SoundcloudIE
+from .vbox7 import Vbox7IE
+from .dbtv import DBTVIE
 
 
 class GenericIE(InfoExtractor):
@@ -474,7 +477,7 @@ class GenericIE(InfoExtractor):
             'url': 'http://www.vestifinance.ru/articles/25753',
             'info_dict': {
                 'id': '25753',
-                'title': 'Вести Экономика ― Прямые трансляции с Форума-выставки "Госзаказ-2013"',
+                'title': 'Прямые трансляции с Форума-выставки "Госзаказ-2013"',
             },
             'playlist': [{
                 'info_dict': {
@@ -641,6 +644,8 @@ class GenericIE(InfoExtractor):
                 'ext': 'mp4',
                 'title': 'Key and Peele|October 10, 2012|2|203|Liam Neesons - Uncensored',
                 'description': 'Two valets share their love for movie star Liam Neesons.',
+                'timestamp': 1349922600,
+                'upload_date': '20121011',
             },
         },
         # YouTube embed via <data-embed-url="">
@@ -782,6 +787,15 @@ class GenericIE(InfoExtractor):
                 'upload_date': '20141029',
             }
         },
+        # Soundcloud multiple embeds
+        {
+            'url': 'http://www.guitarplayer.com/lessons/1014/legato-workout-one-hour-to-more-fluid-performance---tab/52809',
+            'info_dict': {
+                'id': '52809',
+                'title': 'Guitar Essentials: Legato Workout—One-Hour to Fluid Performance | TAB + AUDIO',
+            },
+            'playlist_mincount': 7,
+        },
         # Livestream embed
         {
             'url': 'http://www.esa.int/Our_Activities/Space_Science/Rosetta/Philae_comet_touch-down_webcast',
@@ -857,6 +871,7 @@ class GenericIE(InfoExtractor):
                 'description': 'md5:601cb790edd05908957dae8aaa866465',
                 'upload_date': '20150220',
             },
+            'skip': 'All The Daily Show URLs now redirect to http://www.cc.com/shows/',
         },
         # jwplayer YouTube
         {
@@ -1360,6 +1375,27 @@ class GenericIE(InfoExtractor):
             },
             'add_ie': [ArkenaIE.ie_key()],
         },
+        {
+            'url': 'http://nova.bg/news/view/2016/08/16/156543/%D0%BD%D0%B0-%D0%BA%D0%BE%D1%81%D1%8A%D0%BC-%D0%BE%D1%82-%D0%B2%D0%B7%D1%80%D0%B8%D0%B2-%D0%BE%D1%82%D1%86%D0%B5%D0%BF%D0%B8%D1%85%D0%B0-%D1%86%D1%8F%D0%BB-%D0%BA%D0%B2%D0%B0%D1%80%D1%82%D0%B0%D0%BB-%D0%B7%D0%B0%D1%80%D0%B0%D0%B4%D0%B8-%D0%B8%D0%B7%D1%82%D0%B8%D1%87%D0%B0%D0%BD%D0%B5-%D0%BD%D0%B0-%D0%B3%D0%B0%D0%B7-%D0%B2-%D0%BF%D0%BB%D0%BE%D0%B2%D0%B4%D0%B8%D0%B2/',
+            'info_dict': {
+                'id': '1c7141f46c',
+                'ext': 'mp4',
+                'title': 'НА КОСЪМ ОТ ВЗРИВ: Изтичане на газ на бензиностанция в Пловдив',
+            },
+            'params': {
+                'skip_download': True,
+            },
+            'add_ie': [Vbox7IE.ie_key()],
+        },
+        {
+            # DBTV embeds
+            'url': 'http://www.dagbladet.no/2016/02/23/nyheter/nordlys/ski/troms/ver/43254897/',
+            'info_dict': {
+                'id': '43254897',
+                'title': 'Etter ett års planlegging, klaffet endelig alt: - Jeg måtte ta en liten dans',
+            },
+            'playlist_mincount': 3,
+        },
         # {
         #     # TODO: find another test
         #     # http://schema.org/VideoObject
@@ -1996,12 +2032,9 @@ class GenericIE(InfoExtractor):
             return self.url_result(myvi_url)
 
         # Look for embedded soundcloud player
-        mobj = re.search(
-            r'<iframe\s+(?:[a-zA-Z0-9_-]+="[^"]+"\s+)*src="(?P<url>https?://(?:w\.)?soundcloud\.com/player[^"]+)"',
-            webpage)
-        if mobj is not None:
-            url = unescapeHTML(mobj.group('url'))
-            return self.url_result(url)
+        soundcloud_urls = SoundcloudIE._extract_urls(webpage)
+        if soundcloud_urls:
+            return _playlist_from_matches(soundcloud_urls, getter=unescapeHTML, ie=SoundcloudIE.ie_key())
 
         # Look for embedded mtvservices player
         mtvservices_url = MTVServicesEmbeddedIE._extract_url(webpage)
@@ -2197,6 +2230,14 @@ class GenericIE(InfoExtractor):
             return self.url_result(
                 self._proto_relative_url(unescapeHTML(mobj.group(1))), 'Vine')
 
+        # Look for VODPlatform embeds
+        mobj = re.search(
+            r'<iframe[^>]+src=[\'"]((?:https?:)?//(?:www\.)?vod-platform\.net/embed/[^/?#]+)',
+            webpage)
+        if mobj is not None:
+            return self.url_result(
+                self._proto_relative_url(unescapeHTML(mobj.group(1))), 'VODPlatform')
+
         # Look for Instagram embeds
         instagram_embed_url = InstagramIE._extract_embed_url(webpage)
         if instagram_embed_url is not None:
@@ -2221,10 +2262,20 @@ class GenericIE(InfoExtractor):
                 'uploader': video_uploader,
             }
 
+        # Look for VBOX7 embeds
+        vbox7_url = Vbox7IE._extract_url(webpage)
+        if vbox7_url:
+            return self.url_result(vbox7_url, Vbox7IE.ie_key())
+
+        # Look for DBTV embeds
+        dbtv_urls = DBTVIE._extract_urls(webpage)
+        if dbtv_urls:
+            return _playlist_from_matches(dbtv_urls, ie=DBTVIE.ie_key())
+
         # Looking for http://schema.org/VideoObject
         json_ld = self._search_json_ld(
-            webpage, video_id, default=None, expected_type='VideoObject')
-        if json_ld and json_ld.get('url'):
+            webpage, video_id, default={}, expected_type='VideoObject')
+        if json_ld.get('url'):
             info_dict.update({
                 'title': video_title or info_dict['title'],
                 'description': video_description,
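
The soundcloud change above illustrates the pattern GenericIE has been converging on: instead of keeping embed-detection regexes inline, each concrete extractor exposes a class-level _extract_urls(webpage) helper that returns every matching embed, and the generic extractor turns multiple hits into a playlist. A hedged sketch of such a helper (hypothetical class; the real SoundcloudIE._extract_urls may differ in detail):

    import re

    class SomeSiteIE(object):  # stand-in for an extractor with embeds
        @staticmethod
        def _extract_urls(webpage):
            return [m.group('url') for m in re.finditer(
                r'<iframe[^>]+src=(["\'])(?P<url>https?://(?:w\.)?soundcloud\.com/player.+?)\1',
                webpage)]

    page = '<iframe src="https://w.soundcloud.com/player/?url=track%2F1"></iframe>'
    print(SomeSiteIE._extract_urls(page))
    # ['https://w.soundcloud.com/player/?url=track%2F1']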

View File: youtube_dl/extractor/goldenmoustache.py

@@ -1,48 +0,0 @@
-from __future__ import unicode_literals
-
-from .common import InfoExtractor
-
-
-class GoldenMoustacheIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?goldenmoustache\.com/(?P<display_id>[\w-]+)-(?P<id>\d+)'
-    _TESTS = [{
-        'url': 'http://www.goldenmoustache.com/suricate-le-poker-3700/',
-        'md5': '0f904432fa07da5054d6c8beb5efb51a',
-        'info_dict': {
-            'id': '3700',
-            'ext': 'mp4',
-            'title': 'Suricate - Le Poker',
-            'description': 'md5:3d1f242f44f8c8cb0a106f1fd08e5dc9',
-            'thumbnail': 're:^https?://.*\.jpg$',
-        }
-    }, {
-        'url': 'http://www.goldenmoustache.com/le-lab-tout-effacer-mc-fly-et-carlito-55249/',
-        'md5': '27f0c50fb4dd5f01dc9082fc67cd5700',
-        'info_dict': {
-            'id': '55249',
-            'ext': 'mp4',
-            'title': 'Le LAB - Tout Effacer (Mc Fly et Carlito)',
-            'description': 'md5:9b7fbf11023fb2250bd4b185e3de3b2a',
-            'thumbnail': 're:^https?://.*\.(?:png|jpg)$',
-        }
-    }]
-
-    def _real_extract(self, url):
-        video_id = self._match_id(url)
-        webpage = self._download_webpage(url, video_id)
-
-        video_url = self._html_search_regex(
-            r'data-src-type="mp4" data-src="([^"]+)"', webpage, 'video URL')
-        title = self._html_search_regex(
-            r'<title>(.*?)(?: - Golden Moustache)?</title>', webpage, 'title')
-        thumbnail = self._og_search_thumbnail(webpage)
-        description = self._og_search_description(webpage)
-
-        return {
-            'id': video_id,
-            'url': video_url,
-            'ext': 'mp4',
-            'title': title,
-            'description': description,
-            'thumbnail': thumbnail,
-        }

View File: youtube_dl/extractor/hgtv.py

@@ -0,0 +1,79 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..utils import (
+    int_or_none,
+    js_to_json,
+    smuggle_url,
+)
+
+
+class HGTVIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?hgtv\.ca/[^/]+/video/(?P<id>[^/]+)/video.html'
+    _TEST = {
+        'url': 'http://www.hgtv.ca/homefree/video/overnight-success/video.html?v=738081859718&p=1&s=da#video',
+        'md5': '',
+        'info_dict': {
+            'id': 'aFH__I_5FBOX',
+            'ext': 'mp4',
+            'title': 'Overnight Success',
+            'description': 'After weeks of hard work, high stakes, breakdowns and pep talks, the final 2 contestants compete to win the ultimate dream.',
+            'uploader': 'SHWM-NEW',
+            'timestamp': 1470320034,
+            'upload_date': '20160804',
+        },
+        'params': {
+            # m3u8 download
+            'skip_download': True,
+        },
+    }
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+        embed_vars = self._parse_json(self._search_regex(
+            r'(?s)embed_vars\s*=\s*({.*?});',
+            webpage, 'embed vars'), display_id, js_to_json)
+        return {
+            '_type': 'url_transparent',
+            'url': smuggle_url(
+                'http://link.theplatform.com/s/dtjsEC/%s?mbr=true&manifest=m3u' % embed_vars['pid'], {
+                    'force_smil_url': True
+                }),
+            'series': embed_vars.get('show'),
+            'season_number': int_or_none(embed_vars.get('season')),
+            'episode_number': int_or_none(embed_vars.get('episode')),
+            'ie_key': 'ThePlatform',
+        }
+
+
+class HGTVComShowIE(InfoExtractor):
+    IE_NAME = 'hgtv.com:show'
+    _VALID_URL = r'https?://(?:www\.)?hgtv\.com/shows/[^/]+/(?P<id>[^/?#&]+)'
+    _TEST = {
+        'url': 'http://www.hgtv.com/shows/flip-or-flop/flip-or-flop-full-episodes-videos',
+        'info_dict': {
+            'id': 'flip-or-flop-full-episodes-videos',
+            'title': 'Flip or Flop Full Episodes',
+        },
+        'playlist_mincount': 15,
+    }
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+        config = self._parse_json(
+            self._search_regex(
+                r'(?s)data-module=["\']video["\'][^>]*>.*?<script[^>]+type=["\']text/x-config["\'][^>]*>(.+?)</script',
+                webpage, 'video config'),
+            display_id)['channels'][0]
+        entries = [
+            self.url_result(video['releaseUrl'])
+            for video in config['videos'] if video.get('releaseUrl')]
+        return self.playlist_result(
+            entries, display_id, config.get('title'), config.get('description'))

View File: youtube_dl/extractor/imgur.py

@@ -50,12 +50,10 @@ class ImgurIE(InfoExtractor):
         webpage = self._download_webpage(
             compat_urlparse.urljoin(url, video_id), video_id)
 
-        width = int_or_none(self._search_regex(
-            r'<param name="width" value="([0-9]+)"',
-            webpage, 'width', fatal=False))
-        height = int_or_none(self._search_regex(
-            r'<param name="height" value="([0-9]+)"',
-            webpage, 'height', fatal=False))
+        width = int_or_none(self._og_search_property(
+            'video:width', webpage, default=None))
+        height = int_or_none(self._og_search_property(
+            'video:height', webpage, default=None))
 
         video_elements = self._search_regex(
             r'(?s)<div class="video-elements">(.*?)</div>',

View File: youtube_dl/extractor/instagram.py

@@ -36,7 +36,6 @@ class InstagramIE(InfoExtractor):
         'info_dict': {
             'id': 'BA-pQFBG8HZ',
             'ext': 'mp4',
-            'uploader_id': 'britneyspears',
             'title': 'Video by britneyspears',
             'thumbnail': 're:^https?://.*\.jpg',
             'timestamp': 1453760977,

View File: youtube_dl/extractor/jwplatform.py

@@ -4,10 +4,12 @@ from __future__ import unicode_literals
 import re
 
 from .common import InfoExtractor
+from ..compat import compat_urlparse
 from ..utils import (
     determine_ext,
     float_or_none,
     int_or_none,
+    mimetype2ext,
 )
 
 
@@ -28,74 +30,86 @@ class JWPlatformBaseIE(InfoExtractor):
         return self._parse_jwplayer_data(
             jwplayer_data, video_id, *args, **kwargs)
 
-    def _parse_jwplayer_data(self, jwplayer_data, video_id, require_title=True, m3u8_id=None, rtmp_params=None):
+    def _parse_jwplayer_data(self, jwplayer_data, video_id=None, require_title=True, m3u8_id=None, rtmp_params=None, base_url=None):
         # JWPlayer backward compatibility: flattened playlists
         # https://github.com/jwplayer/jwplayer/blob/v7.4.3/src/js/api/config.js#L81-L96
         if 'playlist' not in jwplayer_data:
             jwplayer_data = {'playlist': [jwplayer_data]}
 
-        video_data = jwplayer_data['playlist'][0]
-
-        # JWPlayer backward compatibility: flattened sources
-        # https://github.com/jwplayer/jwplayer/blob/v7.4.3/src/js/playlist/item.js#L29-L35
-        if 'sources' not in video_data:
-            video_data['sources'] = [video_data]
-
-        formats = []
-        for source in video_data['sources']:
-            source_url = self._proto_relative_url(source['file'])
-            source_type = source.get('type') or ''
-            if source_type in ('application/vnd.apple.mpegurl', 'hls') or determine_ext(source_url) == 'm3u8':
-                formats.extend(self._extract_m3u8_formats(
-                    source_url, video_id, 'mp4', 'm3u8_native', m3u8_id=m3u8_id, fatal=False))
-            elif source_type.startswith('audio'):
-                formats.append({
-                    'url': source_url,
-                    'vcodec': 'none',
-                })
-            else:
-                a_format = {
-                    'url': source_url,
-                    'width': int_or_none(source.get('width')),
-                    'height': int_or_none(source.get('height')),
-                }
-                if source_url.startswith('rtmp'):
-                    a_format['ext'] = 'flv',
-
-                    # See com/longtailvideo/jwplayer/media/RTMPMediaProvider.as
-                    # of jwplayer.flash.swf
-                    rtmp_url_parts = re.split(
-                        r'((?:mp4|mp3|flv):)', source_url, 1)
-                    if len(rtmp_url_parts) == 3:
-                        rtmp_url, prefix, play_path = rtmp_url_parts
-                        a_format.update({
-                            'url': rtmp_url,
-                            'play_path': prefix + play_path,
-                        })
-                    if rtmp_params:
-                        a_format.update(rtmp_params)
-                formats.append(a_format)
-        self._sort_formats(formats)
-
-        subtitles = {}
-        tracks = video_data.get('tracks')
-        if tracks and isinstance(tracks, list):
-            for track in tracks:
-                if track.get('file') and track.get('kind') == 'captions':
-                    subtitles.setdefault(track.get('label') or 'en', []).append({
-                        'url': self._proto_relative_url(track['file'])
-                    })
-
-        return {
-            'id': video_id,
-            'title': video_data['title'] if require_title else video_data.get('title'),
-            'description': video_data.get('description'),
-            'thumbnail': self._proto_relative_url(video_data.get('image')),
-            'timestamp': int_or_none(video_data.get('pubdate')),
-            'duration': float_or_none(jwplayer_data.get('duration')),
-            'subtitles': subtitles,
-            'formats': formats,
-        }
+        entries = []
+        for video_data in jwplayer_data['playlist']:
+            # JWPlayer backward compatibility: flattened sources
+            # https://github.com/jwplayer/jwplayer/blob/v7.4.3/src/js/playlist/item.js#L29-L35
+            if 'sources' not in video_data:
+                video_data['sources'] = [video_data]
+
+            this_video_id = video_id or video_data['mediaid']
+
+            formats = []
+            for source in video_data['sources']:
+                source_url = self._proto_relative_url(source['file'])
+                if base_url:
+                    source_url = compat_urlparse.urljoin(base_url, source_url)
+                source_type = source.get('type') or ''
+                ext = mimetype2ext(source_type) or determine_ext(source_url)
+                if source_type == 'hls' or ext == 'm3u8':
+                    formats.extend(self._extract_m3u8_formats(
+                        source_url, this_video_id, 'mp4', 'm3u8_native', m3u8_id=m3u8_id, fatal=False))
+                # https://github.com/jwplayer/jwplayer/blob/master/src/js/providers/default.js#L67
+                elif source_type.startswith('audio') or ext in ('oga', 'aac', 'mp3', 'mpeg', 'vorbis'):
+                    formats.append({
+                        'url': source_url,
+                        'vcodec': 'none',
+                        'ext': ext,
+                    })
+                else:
+                    a_format = {
+                        'url': source_url,
+                        'width': int_or_none(source.get('width')),
+                        'height': int_or_none(source.get('height')),
+                        'ext': ext,
+                    }
+                    if source_url.startswith('rtmp'):
+                        a_format['ext'] = 'flv'
+
+                        # See com/longtailvideo/jwplayer/media/RTMPMediaProvider.as
+                        # of jwplayer.flash.swf
+                        rtmp_url_parts = re.split(
+                            r'((?:mp4|mp3|flv):)', source_url, 1)
+                        if len(rtmp_url_parts) == 3:
+                            rtmp_url, prefix, play_path = rtmp_url_parts
+                            a_format.update({
+                                'url': rtmp_url,
+                                'play_path': prefix + play_path,
+                            })
+                        if rtmp_params:
+                            a_format.update(rtmp_params)
+                    formats.append(a_format)
+            self._sort_formats(formats)
+
+            subtitles = {}
+            tracks = video_data.get('tracks')
+            if tracks and isinstance(tracks, list):
+                for track in tracks:
+                    if track.get('file') and track.get('kind') == 'captions':
+                        subtitles.setdefault(track.get('label') or 'en', []).append({
+                            'url': self._proto_relative_url(track['file'])
+                        })
+
+            entries.append({
+                'id': this_video_id,
+                'title': video_data['title'] if require_title else video_data.get('title'),
+                'description': video_data.get('description'),
+                'thumbnail': self._proto_relative_url(video_data.get('image')),
+                'timestamp': int_or_none(video_data.get('pubdate')),
+                'duration': float_or_none(jwplayer_data.get('duration')),
+                'subtitles': subtitles,
+                'formats': formats,
+            })
+        if len(entries) == 1:
+            return entries[0]
+        else:
+            return self.playlist_result(entries)
 
 
 class JWPlatformIE(JWPlatformBaseIE):
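
The reworked _parse_jwplayer_data() mainly generalizes the two backward-compatibility shims so they apply per playlist item, then returns either a single info dict or a playlist depending on how many entries survive. The normalization step is easy to isolate (a sketch mirroring the shims above, runnable on its own; note the deliberate self-reference when sources are flattened):

    def normalize(jwplayer_data):
        # flattened playlist: a single item passed as the top-level object
        if 'playlist' not in jwplayer_data:
            jwplayer_data = {'playlist': [jwplayer_data]}
        for video_data in jwplayer_data['playlist']:
            # flattened sources: media fields stored directly on the item,
            # so the item itself becomes its own single source
            if 'sources' not in video_data:
                video_data['sources'] = [video_data]
        return jwplayer_data

    print(normalize({'file': 'http://example.com/v.mp4', 'title': 'Single'}))
    # {'playlist': [{'file': 'http://example.com/v.mp4', 'title': 'Single', 'sources': [{...}]}]}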

View File: youtube_dl/extractor/kaltura.py

@@ -62,6 +62,11 @@ class KalturaIE(InfoExtractor):
         {
             'url': 'https://cdnapisec.kaltura.com/html5/html5lib/v2.30.2/mwEmbedFrame.php/p/1337/uiconf_id/20540612/entry_id/1_sf5ovm7u?wid=_243342',
             'only_matching': True,
+        },
+        {
+            # video with subtitles
+            'url': 'kaltura:111032:1_cw786r8q',
+            'only_matching': True,
         }
     ]
@@ -130,7 +135,6 @@ class KalturaIE(InfoExtractor):
             video_id, actions, service_url, note='Downloading Kaltura signature')['ks']
 
     def _get_video_info(self, video_id, partner_id, service_url=None):
-        signature = self._get_kaltura_signature(video_id, partner_id, service_url)
         actions = [
             {
                 'action': 'null',
@@ -138,18 +142,30 @@ class KalturaIE(InfoExtractor):
                 'clientTag': 'kdp:v3.8.5',
                 'format': 1,  # JSON, 2 = XML, 3 = PHP
                 'service': 'multirequest',
-                'ks': signature,
+            },
+            {
+                'expiry': 86400,
+                'service': 'session',
+                'action': 'startWidgetSession',
+                'widgetId': '_%s' % partner_id,
             },
             {
                 'action': 'get',
                 'entryId': video_id,
                 'service': 'baseentry',
-                'version': '-1',
+                'ks': '{1:result:ks}',
             },
             {
                 'action': 'getbyentryid',
                 'entryId': video_id,
                 'service': 'flavorAsset',
+                'ks': '{1:result:ks}',
+            },
+            {
+                'action': 'list',
+                'filter:entryIdEqual': video_id,
+                'service': 'caption_captionasset',
+                'ks': '{1:result:ks}',
             },
         ]
         return self._kaltura_api_call(
@@ -161,8 +177,9 @@ class KalturaIE(InfoExtractor):
         mobj = re.match(self._VALID_URL, url)
         partner_id, entry_id = mobj.group('partner_id', 'id')
         ks = None
+        captions = None
         if partner_id and entry_id:
-            info, flavor_assets = self._get_video_info(entry_id, partner_id, smuggled_data.get('service_url'))
+            _, info, flavor_assets, captions = self._get_video_info(entry_id, partner_id, smuggled_data.get('service_url'))
         else:
             path, query = mobj.group('path', 'query')
             if not path and not query:
@@ -181,7 +198,7 @@ class KalturaIE(InfoExtractor):
                 raise ExtractorError('Invalid URL', expected=True)
             if 'entry_id' in params:
                 entry_id = params['entry_id'][0]
-                info, flavor_assets = self._get_video_info(entry_id, partner_id)
+                _, info, flavor_assets, captions = self._get_video_info(entry_id, partner_id)
             elif 'uiconf_id' in params and 'flashvars[referenceId]' in params:
                 reference_id = params['flashvars[referenceId]'][0]
                 webpage = self._download_webpage(url, reference_id)
@@ -217,7 +234,7 @@ class KalturaIE(InfoExtractor):
         formats = []
         for f in flavor_assets:
             # Continue if asset is not ready
-            if f['status'] != 2:
+            if f.get('status') != 2:
                 continue
             video_url = sign_url(
                 '%s/flavorId/%s' % (data_url, f['id']))
@@ -240,13 +257,24 @@ class KalturaIE(InfoExtractor):
                 m3u8_url, entry_id, 'mp4', 'm3u8_native',
                 m3u8_id='hls', fatal=False))
 
-        self._check_formats(formats, entry_id)
         self._sort_formats(formats)
 
+        subtitles = {}
+        if captions:
+            for caption in captions.get('objects', []):
+                # Continue if caption is not ready
+                if f.get('status') != 2:
+                    continue
+                subtitles.setdefault(caption.get('languageCode') or caption.get('language'), []).append({
+                    'url': '%s/api_v3/service/caption_captionasset/action/serve/captionAssetId/%s' % (self._SERVICE_URL, caption['id']),
+                    'ext': caption.get('fileExt'),
+                })
+
         return {
             'id': entry_id,
             'title': info['name'],
             'formats': formats,
+            'subtitles': subtitles,
             'description': clean_html(info.get('description')),
             'thumbnail': info.get('thumbnailUrl'),
             'duration': info.get('duration'),
View File: youtube_dl/extractor/keezmovies.py

@@ -3,64 +3,126 @@ from __future__ import unicode_literals
import re import re
from .common import InfoExtractor from .common import InfoExtractor
from ..aes import aes_decrypt_text
from ..compat import (
compat_str,
compat_urllib_parse_unquote,
)
from ..utils import ( from ..utils import (
sanitized_Request, determine_ext,
url_basename, ExtractorError,
int_or_none,
str_to_int,
strip_or_none,
) )
class KeezMoviesIE(InfoExtractor): class KeezMoviesIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?keezmovies\.com/video/.+?(?P<id>[0-9]+)(?:[/?&]|$)' _VALID_URL = r'https?://(?:www\.)?keezmovies\.com/video/(?:(?P<display_id>[^/]+)-)?(?P<id>\d+)'
_TEST = { _TESTS = [{
'url': 'http://www.keezmovies.com/video/petite-asian-lady-mai-playing-in-bathtub-1214711', 'url': 'http://www.keezmovies.com/video/petite-asian-lady-mai-playing-in-bathtub-1214711',
'md5': '1c1e75d22ffa53320f45eeb07bc4cdc0', 'md5': '1c1e75d22ffa53320f45eeb07bc4cdc0',
'info_dict': { 'info_dict': {
'id': '1214711', 'id': '1214711',
'display_id': 'petite-asian-lady-mai-playing-in-bathtub',
'ext': 'mp4', 'ext': 'mp4',
'title': 'Petite Asian Lady Mai Playing In Bathtub', 'title': 'Petite Asian Lady Mai Playing In Bathtub',
'age_limit': 18,
'thumbnail': 're:^https?://.*\.jpg$', 'thumbnail': 're:^https?://.*\.jpg$',
'view_count': int,
'age_limit': 18,
} }
} }, {
'url': 'http://www.keezmovies.com/video/1214711',
'only_matching': True,
}]
def _real_extract(self, url): def _extract_info(self, url):
video_id = self._match_id(url) mobj = re.match(self._VALID_URL, url)
video_id = mobj.group('id')
display_id = (mobj.group('display_id')
if 'display_id' in mobj.groupdict()
else None) or mobj.group('id')
req = sanitized_Request(url) webpage = self._download_webpage(
req.add_header('Cookie', 'age_verified=1') url, display_id, headers={'Cookie': 'age_verified=1'})
webpage = self._download_webpage(req, video_id)
# embedded video
mobj = re.search(r'href="([^"]+)"></iframe>', webpage)
if mobj:
embedded_url = mobj.group(1)
return self.url_result(embedded_url)
video_title = self._html_search_regex(
r'<h1 [^>]*>([^<]+)', webpage, 'title')
flashvars = self._parse_json(self._search_regex(
r'var\s+flashvars\s*=\s*([^;]+);', webpage, 'flashvars'), video_id)
formats = [] formats = []
for height in (180, 240, 480): format_urls = set()
if flashvars.get('quality_%dp' % height):
video_url = flashvars['quality_%dp' % height]
a_format = {
'url': video_url,
'height': height,
'format_id': '%dp' % height,
}
filename_parts = url_basename(video_url).split('_')
if len(filename_parts) >= 2 and re.match(r'\d+[Kk]', filename_parts[1]):
a_format['tbr'] = int(filename_parts[1][:-1])
formats.append(a_format)
age_limit = self._rta_search(webpage) title = None
thumbnail = None
duration = None
encrypted = False
return { def extract_format(format_url, height=None):
if not isinstance(format_url, compat_str) or not format_url.startswith('http'):
return
if format_url in format_urls:
return
format_urls.add(format_url)
tbr = int_or_none(self._search_regex(
r'[/_](\d+)[kK][/_]', format_url, 'tbr', default=None))
if not height:
height = int_or_none(self._search_regex(
r'[/_](\d+)[pP][/_]', format_url, 'height', default=None))
if encrypted:
format_url = aes_decrypt_text(
format_url, title, 32).decode('utf-8')
formats.append({
'url': format_url,
'format_id': '%dp' % height if height else None,
'height': height,
'tbr': tbr,
})
flashvars = self._parse_json(
self._search_regex(
r'flashvars\s*=\s*({.+?});', webpage,
'flashvars', default='{}'),
display_id, fatal=False)
if flashvars:
title = flashvars.get('video_title')
thumbnail = flashvars.get('image_url')
duration = int_or_none(flashvars.get('video_duration'))
encrypted = flashvars.get('encrypted') is True
for key, value in flashvars.items():
mobj = re.search(r'quality_(\d+)[pP]', key)
if mobj:
extract_format(value, int(mobj.group(1)))
video_url = flashvars.get('video_url')
if video_url and determine_ext(video_url, None):
extract_format(video_url)
video_url = self._html_search_regex(
r'flashvars\.video_url\s*=\s*(["\'])(?P<url>http.+?)\1',
webpage, 'video url', default=None, group='url')
if video_url:
extract_format(compat_urllib_parse_unquote(video_url))
if not formats:
if 'title="This video is no longer available"' in webpage:
raise ExtractorError(
'Video %s is no longer available' % video_id, expected=True)
self._sort_formats(formats)
if not title:
title = self._html_search_regex(
r'<h1[^>]*>([^<]+)', webpage, 'title')
return webpage, {
'id': video_id, 'id': video_id,
'title': video_title, 'display_id': display_id,
'title': strip_or_none(title),
'thumbnail': thumbnail,
'duration': duration,
'age_limit': 18,
'formats': formats, 'formats': formats,
'age_limit': age_limit,
'thumbnail': flashvars.get('image_url')
} }
def _real_extract(self, url):
webpage, info = self._extract_info(url)
info['view_count'] = str_to_int(self._search_regex(
r'<b>([\d,.]+)</b> Views?', webpage, 'view count', fatal=False))
return info
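
The new extract_format helper dedupes URLs through a set and mines bitrate and height hints out of path tokens such as _1500k_ and _720p_. That parsing step in isolation (the URL is hypothetical):

import re

def first_group(pattern, s):
    m = re.search(pattern, s)
    return int(m.group(1)) if m else None

format_url = 'http://cdn.example.com/videos/abc_720p_1500k_tmb.mp4'  # hypothetical

format_urls = set()
if format_url not in format_urls:  # same dedup idea as the helper above
    format_urls.add(format_url)
    tbr = first_group(r'[/_](\d+)[kK][/_]', format_url)     # 1500
    height = first_group(r'[/_](\d+)[pP][/_]', format_url)  # 720
    print({'format_id': '%dp' % height, 'height': height, 'tbr': tbr})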


@@ -4,6 +4,7 @@ from __future__ import unicode_literals
import re import re
from .common import InfoExtractor from .common import InfoExtractor
from ..compat import compat_urlparse
from ..utils import ( from ..utils import (
get_element_by_id, get_element_by_id,
clean_html, clean_html,
@@ -242,8 +243,9 @@ class KuwoSingerIE(InfoExtractor):
query={'artistId': artist_id, 'pn': page_num, 'rn': self.PAGE_SIZE}) query={'artistId': artist_id, 'pn': page_num, 'rn': self.PAGE_SIZE})
return [ return [
self.url_result(song_url, 'Kuwo') for song_url in re.findall( self.url_result(compat_urlparse.urljoin(url, song_url), 'Kuwo')
r'<div[^>]+class="name"><a[^>]+href="(http://www\.kuwo\.cn/yinyue/\d+)', for song_url in re.findall(
r'<div[^>]+class="name"><a[^>]+href="(/yinyue/\d+)',
webpage) webpage)
] ]
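
The change is needed because kuwo.cn now emits root-relative song links (/yinyue/<id>), which must be resolved against the page URL first. The stdlib urljoin, which compat_urlparse wraps, does exactly that:

try:
    from urllib.parse import urljoin  # Python 3
except ImportError:
    from urlparse import urljoin  # Python 2 (what compat_urlparse aliases)

page_url = 'http://www.kuwo.cn/mingxing/Ali/'  # hypothetical singer page URL
print(urljoin(page_url, '/yinyue/6446136'))  # http://www.kuwo.cn/yinyue/6446136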


@@ -4,7 +4,10 @@ from __future__ import unicode_literals
import re import re
from .common import InfoExtractor from .common import InfoExtractor
from ..compat import compat_urlparse from ..compat import (
compat_str,
compat_urlparse,
)
from ..utils import ( from ..utils import (
determine_ext, determine_ext,
ExtractorError, ExtractorError,
@@ -96,7 +99,7 @@ class LifeNewsIE(InfoExtractor):
r'<video[^>]+><source[^>]+src=["\'](.+?)["\']', webpage) r'<video[^>]+><source[^>]+src=["\'](.+?)["\']', webpage)
iframe_links = re.findall( iframe_links = re.findall(
r'<iframe[^>]+src=["\']((?:https?:)?//embed\.life\.ru/embed/.+?)["\']', r'<iframe[^>]+src=["\']((?:https?:)?//embed\.life\.ru/(?:embed|video)/.+?)["\']',
webpage) webpage)
if not video_urls and not iframe_links: if not video_urls and not iframe_links:
@@ -164,9 +167,9 @@ class LifeNewsIE(InfoExtractor):
class LifeEmbedIE(InfoExtractor): class LifeEmbedIE(InfoExtractor):
IE_NAME = 'life:embed' IE_NAME = 'life:embed'
_VALID_URL = r'https?://embed\.life\.ru/embed/(?P<id>[\da-f]{32})' _VALID_URL = r'https?://embed\.life\.ru/(?:embed|video)/(?P<id>[\da-f]{32})'
_TEST = { _TESTS = [{
'url': 'http://embed.life.ru/embed/e50c2dec2867350528e2574c899b8291', 'url': 'http://embed.life.ru/embed/e50c2dec2867350528e2574c899b8291',
'md5': 'b889715c9e49cb1981281d0e5458fbbe', 'md5': 'b889715c9e49cb1981281d0e5458fbbe',
'info_dict': { 'info_dict': {
@@ -175,30 +178,57 @@ class LifeEmbedIE(InfoExtractor):
'title': 'e50c2dec2867350528e2574c899b8291', 'title': 'e50c2dec2867350528e2574c899b8291',
'thumbnail': 're:http://.*\.jpg', 'thumbnail': 're:http://.*\.jpg',
} }
} }, {
# with 1080p
'url': 'https://embed.life.ru/video/e50c2dec2867350528e2574c899b8291',
'only_matching': True,
}]
def _real_extract(self, url): def _real_extract(self, url):
video_id = self._match_id(url) video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id) webpage = self._download_webpage(url, video_id)
thumbnail = None
formats = [] formats = []
for video_url in re.findall(r'"file"\s*:\s*"([^"]+)', webpage):
video_url = compat_urlparse.urljoin(url, video_url) def extract_m3u8(manifest_url):
ext = determine_ext(video_url) formats.extend(self._extract_m3u8_formats(
if ext == 'm3u8': manifest_url, video_id, 'mp4',
formats.extend(self._extract_m3u8_formats( entry_protocol='m3u8_native', m3u8_id='m3u8'))
video_url, video_id, 'mp4',
entry_protocol='m3u8_native', m3u8_id='m3u8')) def extract_original(original_url):
else: formats.append({
formats.append({ 'url': original_url,
'url': video_url, 'format_id': determine_ext(original_url, None),
'format_id': ext, 'preference': 1,
'preference': 1, })
})
playlist = self._parse_json(
self._search_regex(
r'options\s*=\s*({.+?});', webpage, 'options', default='{}'),
video_id).get('playlist', {})
if playlist:
master = playlist.get('master')
if isinstance(master, compat_str) and determine_ext(master) == 'm3u8':
extract_m3u8(compat_urlparse.urljoin(url, master))
original = playlist.get('original')
if isinstance(original, compat_str):
extract_original(original)
thumbnail = playlist.get('image')
# Old rendition fallback
if not formats:
for video_url in re.findall(r'"file"\s*:\s*"([^"]+)', webpage):
video_url = compat_urlparse.urljoin(url, video_url)
if determine_ext(video_url) == 'm3u8':
extract_m3u8(video_url)
else:
extract_original(video_url)
self._sort_formats(formats) self._sort_formats(formats)
thumbnail = self._search_regex( thumbnail = thumbnail or self._search_regex(
r'"image"\s*:\s*"([^"]+)', webpage, 'thumbnail', default=None) r'"image"\s*:\s*"([^"]+)', webpage, 'thumbnail', default=None)
return { return {
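
The embed page now exposes an options = {...} object whose playlist block lists a master HLS manifest, an original progressive file and a poster image; scraping the legacy "file" entries is kept only as a fallback. A sketch of the new path against an assumed payload of that shape (all URLs are made up):

import json

options = json.loads('''{"playlist": {
    "master": "http://cdn.life.ru/m/e50c2dec/master.m3u8",
    "original": "http://cdn.life.ru/v/e50c2dec.mp4",
    "image": "http://cdn.life.ru/i/e50c2dec.jpg"}}''')

playlist = options.get('playlist', {})
master = playlist.get('master')
if master and master.endswith('.m3u8'):
    print('HLS manifest:', master)  # would be fed to _extract_m3u8_formats
original = playlist.get('original')
if original:
    print('progressive original:', original)  # gets preference=1 above
thumbnail = playlist.get('image')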


@@ -37,11 +37,12 @@ class LimelightBaseIE(InfoExtractor):
for stream in streams: for stream in streams:
stream_url = stream.get('url') stream_url = stream.get('url')
if not stream_url: if not stream_url or stream.get('drmProtected'):
continue continue
if '.f4m' in stream_url: ext = determine_ext(stream_url)
if ext == 'f4m':
formats.extend(self._extract_f4m_formats( formats.extend(self._extract_f4m_formats(
stream_url, video_id, fatal=False)) stream_url, video_id, f4m_id='hds', fatal=False))
else: else:
fmt = { fmt = {
'url': stream_url, 'url': stream_url,
@@ -50,13 +51,19 @@ class LimelightBaseIE(InfoExtractor):
'fps': float_or_none(stream.get('videoFrameRate')), 'fps': float_or_none(stream.get('videoFrameRate')),
'width': int_or_none(stream.get('videoWidthInPixels')), 'width': int_or_none(stream.get('videoWidthInPixels')),
'height': int_or_none(stream.get('videoHeightInPixels')), 'height': int_or_none(stream.get('videoHeightInPixels')),
'ext': determine_ext(stream_url) 'ext': ext,
} }
rtmp = re.search(r'^(?P<url>rtmpe?://[^/]+/(?P<app>.+))/(?P<playpath>mp4:.+)$', stream_url) rtmp = re.search(r'^(?P<url>rtmpe?://(?P<host>[^/]+)/(?P<app>.+))/(?P<playpath>mp4:.+)$', stream_url)
if rtmp: if rtmp:
format_id = 'rtmp' format_id = 'rtmp'
if stream.get('videoBitRate'): if stream.get('videoBitRate'):
format_id += '-%d' % int_or_none(stream['videoBitRate']) format_id += '-%d' % int_or_none(stream['videoBitRate'])
http_fmt = fmt.copy()
http_fmt.update({
'url': 'http://%s/%s' % (rtmp.group('host').replace('csl.', 'cpl.'), rtmp.group('playpath')[4:]),
'format_id': format_id.replace('rtmp', 'http'),
})
formats.append(http_fmt)
fmt.update({ fmt.update({
'url': rtmp.group('url'), 'url': rtmp.group('url'),
'play_path': rtmp.group('playpath'), 'play_path': rtmp.group('playpath'),
@@ -68,18 +75,23 @@ class LimelightBaseIE(InfoExtractor):
for mobile_url in mobile_urls: for mobile_url in mobile_urls:
media_url = mobile_url.get('mobileUrl') media_url = mobile_url.get('mobileUrl')
if not media_url:
continue
format_id = mobile_url.get('targetMediaPlatform') format_id = mobile_url.get('targetMediaPlatform')
if determine_ext(media_url) == 'm3u8': if not media_url or format_id == 'Widevine':
continue
ext = determine_ext(media_url)
if ext == 'm3u8':
formats.extend(self._extract_m3u8_formats( formats.extend(self._extract_m3u8_formats(
media_url, video_id, 'mp4', 'm3u8_native', media_url, video_id, 'mp4', 'm3u8_native',
m3u8_id=format_id, fatal=False)) m3u8_id=format_id, fatal=False))
elif ext == 'f4m':
formats.extend(self._extract_f4m_formats(
media_url, video_id, f4m_id=format_id, fatal=False))
else: else:
formats.append({ formats.append({
'url': media_url, 'url': media_url,
'format_id': format_id, 'format_id': format_id,
'preference': -1, 'preference': -1,
'ext': ext,
}) })
self._sort_formats(formats) self._sort_formats(formats)
@@ -145,7 +157,7 @@ class LimelightMediaIE(LimelightBaseIE):
'url': 'http://link.videoplatform.limelight.com/media/?mediaId=3ffd040b522b4485b6d84effc750cd86', 'url': 'http://link.videoplatform.limelight.com/media/?mediaId=3ffd040b522b4485b6d84effc750cd86',
'info_dict': { 'info_dict': {
'id': '3ffd040b522b4485b6d84effc750cd86', 'id': '3ffd040b522b4485b6d84effc750cd86',
'ext': 'flv', 'ext': 'mp4',
'title': 'HaP and the HB Prince Trailer', 'title': 'HaP and the HB Prince Trailer',
'description': 'md5:8005b944181778e313d95c1237ddb640', 'description': 'md5:8005b944181778e313d95c1237ddb640',
'thumbnail': 're:^https?://.*\.jpeg$', 'thumbnail': 're:^https?://.*\.jpeg$',
@@ -154,27 +166,23 @@ class LimelightMediaIE(LimelightBaseIE):
'upload_date': '20090604', 'upload_date': '20090604',
}, },
'params': { 'params': {
# rtmp download # m3u8 download
'skip_download': True, 'skip_download': True,
}, },
}, { }, {
# video with subtitles # video with subtitles
'url': 'limelight:media:a3e00274d4564ec4a9b29b9466432335', 'url': 'limelight:media:a3e00274d4564ec4a9b29b9466432335',
'md5': '2fa3bad9ac321e23860ca23bc2c69e3d',
'info_dict': { 'info_dict': {
'id': 'a3e00274d4564ec4a9b29b9466432335', 'id': 'a3e00274d4564ec4a9b29b9466432335',
'ext': 'flv', 'ext': 'mp4',
'title': '3Play Media Overview Video', 'title': '3Play Media Overview Video',
'description': '',
'thumbnail': 're:^https?://.*\.jpeg$', 'thumbnail': 're:^https?://.*\.jpeg$',
'duration': 78.101, 'duration': 78.101,
'timestamp': 1338929955, 'timestamp': 1338929955,
'upload_date': '20120605', 'upload_date': '20120605',
'subtitles': 'mincount:9', 'subtitles': 'mincount:9',
}, },
'params': {
# rtmp download
'skip_download': True,
},
}, { }, {
'url': 'https://assets.delvenetworks.com/player/loader.swf?mediaId=8018a574f08d416e95ceaccae4ba0452', 'url': 'https://assets.delvenetworks.com/player/loader.swf?mediaId=8018a574f08d416e95ceaccae4ba0452',
'only_matching': True, 'only_matching': True,
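
Besides skipping DRM-protected streams, the patch derives an HTTP mirror for every RTMP stream: the csl. host prefix becomes cpl. and the mp4: play-path prefix is dropped. That transformation in isolation (the stream URL is hypothetical; the csl./cpl. host convention is an assumption baked into the change):

import re

stream_url = 'rtmp://csl.p01.delve.lldns.net/a1234/o1/mp4:s/folder/video_720.mp4'

rtmp = re.search(
    r'^(?P<url>rtmpe?://(?P<host>[^/]+)/(?P<app>.+))/(?P<playpath>mp4:.+)$',
    stream_url)
if rtmp:
    http_url = 'http://%s/%s' % (
        rtmp.group('host').replace('csl.', 'cpl.'),  # swap CDN host prefix
        rtmp.group('playpath')[4:])                  # strip leading 'mp4:'
    print(http_url)  # http://cpl.p01.delve.lldns.net/s/folder/video_720.mp4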


@@ -9,7 +9,7 @@ class MGTVIE(InfoExtractor):
_VALID_URL = r'https?://www\.mgtv\.com/v/(?:[^/]+/)*(?P<id>\d+)\.html' _VALID_URL = r'https?://www\.mgtv\.com/v/(?:[^/]+/)*(?P<id>\d+)\.html'
IE_DESC = '芒果TV' IE_DESC = '芒果TV'
_TEST = { _TESTS = [{
'url': 'http://www.mgtv.com/v/1/290525/f/3116640.html', 'url': 'http://www.mgtv.com/v/1/290525/f/3116640.html',
'md5': '1bdadcf760a0b90946ca68ee9a2db41a', 'md5': '1bdadcf760a0b90946ca68ee9a2db41a',
'info_dict': { 'info_dict': {
@@ -20,7 +20,11 @@ class MGTVIE(InfoExtractor):
'duration': 7461, 'duration': 7461,
'thumbnail': 're:^https?://.*\.jpg$', 'thumbnail': 're:^https?://.*\.jpg$',
}, },
} }, {
# no tbr extracted from stream_url
'url': 'http://www.mgtv.com/v/1/1/f/3324755.html',
'only_matching': True,
}]
def _real_extract(self, url): def _real_extract(self, url):
video_id = self._match_id(url) video_id = self._match_id(url)
@@ -41,7 +45,8 @@ class MGTVIE(InfoExtractor):
def extract_format(stream_url, format_id, idx, query={}): def extract_format(stream_url, format_id, idx, query={}):
format_info = self._download_json( format_info = self._download_json(
stream_url, video_id, stream_url, video_id,
note='Download video info for format %s' % format_id or '#%d' % idx, query=query) note='Download video info for format %s' % (format_id or '#%d' % idx),
query=query)
return { return {
'format_id': format_id, 'format_id': format_id,
'url': format_info['info'], 'url': format_info['info'],
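
The one-line fix above is an operator precedence bug: % binds tighter than or, so in the old expression the '#%d' % idx fallback could never be reached. A quick demonstration:

format_id, idx = None, 0

old = 'Download video info for format %s' % format_id or '#%d' % idx
new = 'Download video info for format %s' % (format_id or '#%d' % idx)

print(old)  # Download video info for format None  ('or' never fires)
print(new)  # Download video info for format #0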


@@ -1,53 +1,56 @@
from __future__ import unicode_literals from __future__ import unicode_literals
import os from ..utils import (
import re int_or_none,
str_to_int,
from .common import InfoExtractor unified_strdate,
from ..compat import (
compat_urllib_parse_unquote,
compat_urllib_parse_urlparse,
) )
from ..utils import sanitized_Request from .keezmovies import KeezMoviesIE
class MofosexIE(InfoExtractor): class MofosexIE(KeezMoviesIE):
_VALID_URL = r'https?://(?:www\.)?(?P<url>mofosex\.com/videos/(?P<id>[0-9]+)/.*?\.html)' _VALID_URL = r'https?://(?:www\.)?mofosex\.com/videos/(?P<id>\d+)/(?P<display_id>[^/?#&.]+)\.html'
_TEST = { _TESTS = [{
'url': 'http://www.mofosex.com/videos/5018/japanese-teen-music-video.html', 'url': 'http://www.mofosex.com/videos/318131/amateur-teen-playing-and-masturbating-318131.html',
'md5': '1b2eb47ac33cc75d4a80e3026b613c5a', 'md5': '39a15853632b7b2e5679f92f69b78e91',
'info_dict': { 'info_dict': {
'id': '5018', 'id': '318131',
'display_id': 'amateur-teen-playing-and-masturbating-318131',
'ext': 'mp4', 'ext': 'mp4',
'title': 'Japanese Teen Music Video', 'title': 'amateur teen playing and masturbating',
'thumbnail': 're:^https?://.*\.jpg$',
'upload_date': '20121114',
'view_count': int,
'like_count': int,
'dislike_count': int,
'age_limit': 18, 'age_limit': 18,
} }
} }, {
# This video is no longer available
'url': 'http://www.mofosex.com/videos/5018/japanese-teen-music-video.html',
'only_matching': True,
}]
def _real_extract(self, url): def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url) webpage, info = self._extract_info(url)
video_id = mobj.group('id')
url = 'http://www.' + mobj.group('url')
req = sanitized_Request(url) view_count = str_to_int(self._search_regex(
req.add_header('Cookie', 'age_verified=1') r'VIEWS:</span>\s*([\d,.]+)', webpage, 'view count', fatal=False))
webpage = self._download_webpage(req, video_id) like_count = int_or_none(self._search_regex(
r'id=["\']amountLikes["\'][^>]*>(\d+)', webpage,
'like count', fatal=False))
dislike_count = int_or_none(self._search_regex(
r'id=["\']amountDislikes["\'][^>]*>(\d+)', webpage,
'dislike count', fatal=False))
upload_date = unified_strdate(self._html_search_regex(
r'Added:</span>([^<]+)', webpage, 'upload date', fatal=False))
video_title = self._html_search_regex(r'<h1>(.+?)<', webpage, 'title') info.update({
video_url = compat_urllib_parse_unquote(self._html_search_regex(r'flashvars.video_url = \'([^\']+)', webpage, 'video_url')) 'view_count': view_count,
path = compat_urllib_parse_urlparse(video_url).path 'like_count': like_count,
extension = os.path.splitext(path)[1][1:] 'dislike_count': dislike_count,
format = path.split('/')[5].split('_')[:2] 'upload_date': upload_date,
format = '-'.join(format) 'thumbnail': self._og_search_thumbnail(webpage),
})
age_limit = self._rta_search(webpage) return info
return {
'id': video_id,
'title': video_title,
'url': video_url,
'ext': extension,
'format': format,
'format_id': format,
'age_limit': age_limit,
}


@@ -16,6 +16,7 @@ from ..utils import (
HEADRequest, HEADRequest,
sanitized_Request, sanitized_Request,
strip_or_none, strip_or_none,
timeconvert,
unescapeHTML, unescapeHTML,
url_basename, url_basename,
RegexNotFoundError, RegexNotFoundError,
@@ -36,13 +37,13 @@ class MTVServicesInfoExtractor(InfoExtractor):
return uri.split(':')[-1] return uri.split(':')[-1]
# This was originally implemented for ComedyCentral, but it also works here # This was originally implemented for ComedyCentral, but it also works here
@staticmethod @classmethod
def _transform_rtmp_url(rtmp_video_url): def _transform_rtmp_url(cls, rtmp_video_url):
m = re.match(r'^rtmpe?://.*?/(?P<finalid>gsp\..+?/.*)$', rtmp_video_url) m = re.match(r'^rtmpe?://.*?/(?P<finalid>gsp\..+?/.*)$', rtmp_video_url)
if not m: if not m:
return rtmp_video_url return {'rtmp': rtmp_video_url}
base = 'http://viacommtvstrmfs.fplive.net/' base = 'http://viacommtvstrmfs.fplive.net/'
return base + m.group('finalid') return {'http': base + m.group('finalid')}
def _get_feed_url(self, uri): def _get_feed_url(self, uri):
return self._FEED_URL return self._FEED_URL
@@ -86,14 +87,14 @@ class MTVServicesInfoExtractor(InfoExtractor):
rtmp_video_url = rendition.find('./src').text rtmp_video_url = rendition.find('./src').text
if rtmp_video_url.endswith('siteunavail.png'): if rtmp_video_url.endswith('siteunavail.png'):
continue continue
new_url = self._transform_rtmp_url(rtmp_video_url) new_urls = self._transform_rtmp_url(rtmp_video_url)
formats.append({ formats.extend([{
'ext': 'flv' if new_url.startswith('rtmp') else ext, 'ext': 'flv' if new_url.startswith('rtmp') else ext,
'url': new_url, 'url': new_url,
'format_id': rendition.get('bitrate'), 'format_id': '-'.join(filter(None, [kind, rendition.get('bitrate')])),
'width': int(rendition.get('width')), 'width': int(rendition.get('width')),
'height': int(rendition.get('height')), 'height': int(rendition.get('height')),
}) } for kind, new_url in new_urls.items()])
except (KeyError, TypeError): except (KeyError, TypeError):
raise ExtractorError('Invalid rendition field.') raise ExtractorError('Invalid rendition field.')
self._sort_formats(formats) self._sort_formats(formats)
@@ -136,6 +137,8 @@ class MTVServicesInfoExtractor(InfoExtractor):
description = strip_or_none(xpath_text(itemdoc, 'description')) description = strip_or_none(xpath_text(itemdoc, 'description'))
timestamp = timeconvert(xpath_text(itemdoc, 'pubDate'))
title_el = None title_el = None
if title_el is None: if title_el is None:
title_el = find_xpath_attr( title_el = find_xpath_attr(
@@ -168,6 +171,7 @@ class MTVServicesInfoExtractor(InfoExtractor):
'thumbnail': self._get_thumbnail_url(uri, itemdoc), 'thumbnail': self._get_thumbnail_url(uri, itemdoc),
'description': description, 'description': description,
'duration': float_or_none(content_el.attrib.get('duration')), 'duration': float_or_none(content_el.attrib.get('duration')),
'timestamp': timestamp,
} }
def _get_feed_query(self, uri): def _get_feed_query(self, uri):
@@ -186,8 +190,13 @@ class MTVServicesInfoExtractor(InfoExtractor):
idoc = self._download_xml( idoc = self._download_xml(
url, video_id, url, video_id,
'Downloading info', transform_source=fix_xml_ampersands) 'Downloading info', transform_source=fix_xml_ampersands)
title = xpath_text(idoc, './channel/title')
description = xpath_text(idoc, './channel/description')
return self.playlist_result( return self.playlist_result(
[self._get_video_info(item) for item in idoc.findall('.//item')]) [self._get_video_info(item) for item in idoc.findall('.//item')],
playlist_title=title, playlist_description=description)
def _extract_mgid(self, webpage): def _extract_mgid(self, webpage):
try: try:
@@ -233,6 +242,8 @@ class MTVServicesEmbeddedIE(MTVServicesInfoExtractor):
'ext': 'mp4', 'ext': 'mp4',
'title': 'Peter Dinklage Sums Up \'Game Of Thrones\' In 45 Seconds', 'title': 'Peter Dinklage Sums Up \'Game Of Thrones\' In 45 Seconds',
'description': '"Sexy sexy sexy, stabby stabby stabby, beautiful language," says Peter Dinklage as he tries summarizing "Game of Thrones" in under a minute.', 'description': '"Sexy sexy sexy, stabby stabby stabby, beautiful language," says Peter Dinklage as he tries summarizing "Game of Thrones" in under a minute.',
'timestamp': 1400126400,
'upload_date': '20140515',
}, },
} }
@@ -275,6 +286,8 @@ class MTVIE(MTVServicesInfoExtractor):
'ext': 'mp4', 'ext': 'mp4',
'title': 'Taylor Swift - "Ours (VH1 Storytellers)"', 'title': 'Taylor Swift - "Ours (VH1 Storytellers)"',
'description': 'Album: Taylor Swift performs "Ours" for VH1 Storytellers at Harvey Mudd College.', 'description': 'Album: Taylor Swift performs "Ours" for VH1 Storytellers at Harvey Mudd College.',
'timestamp': 1352610000,
'upload_date': '20121111',
}, },
}, },
] ]
@@ -301,20 +314,6 @@ class MTVIE(MTVServicesInfoExtractor):
return self._get_videos_info(uri) return self._get_videos_info(uri)
class MTVIggyIE(MTVServicesInfoExtractor):
IE_NAME = 'mtviggy.com'
_VALID_URL = r'https?://www\.mtviggy\.com/videos/.+'
_TEST = {
'url': 'http://www.mtviggy.com/videos/arcade-fire-behind-the-scenes-at-the-biggest-music-experiment-yet/',
'info_dict': {
'id': '984696',
'ext': 'mp4',
'title': 'Arcade Fire: Behind the Scenes at the Biggest Music Experiment Yet',
}
}
_FEED_URL = 'http://all.mtvworldverticals.com/feed-xml/'
class MTVDEIE(MTVServicesInfoExtractor): class MTVDEIE(MTVServicesInfoExtractor):
IE_NAME = 'mtv.de' IE_NAME = 'mtv.de'
_VALID_URL = r'https?://(?:www\.)?mtv\.de/(?:artists|shows|news)/(?:[^/]+/)*(?P<id>\d+)-[^/#?]+/*(?:[#?].*)?$' _VALID_URL = r'https?://(?:www\.)?mtv\.de/(?:artists|shows|news)/(?:[^/]+/)*(?P<id>\d+)-[^/#?]+/*(?:[#?].*)?$'
@@ -322,7 +321,7 @@ class MTVDEIE(MTVServicesInfoExtractor):
'url': 'http://www.mtv.de/artists/10571-cro/videos/61131-traum', 'url': 'http://www.mtv.de/artists/10571-cro/videos/61131-traum',
'info_dict': { 'info_dict': {
'id': 'music_video-a50bc5f0b3aa4b3190aa', 'id': 'music_video-a50bc5f0b3aa4b3190aa',
'ext': 'mp4', 'ext': 'flv',
'title': 'MusicVideo_cro-traum', 'title': 'MusicVideo_cro-traum',
'description': 'Cro - Traum', 'description': 'Cro - Traum',
}, },
@@ -330,20 +329,21 @@ class MTVDEIE(MTVServicesInfoExtractor):
# rtmp download # rtmp download
'skip_download': True, 'skip_download': True,
}, },
'skip': 'Blocked at Travis CI',
}, { }, {
# mediagen URL without query (e.g. http://videos.mtvnn.com/mediagen/e865da714c166d18d6f80893195fcb97) # mediagen URL without query (e.g. http://videos.mtvnn.com/mediagen/e865da714c166d18d6f80893195fcb97)
'url': 'http://www.mtv.de/shows/933-teen-mom-2/staffeln/5353/folgen/63565-enthullungen', 'url': 'http://www.mtv.de/shows/933-teen-mom-2/staffeln/5353/folgen/63565-enthullungen',
'info_dict': { 'info_dict': {
'id': 'local_playlist-f5ae778b9832cc837189', 'id': 'local_playlist-f5ae778b9832cc837189',
'ext': 'mp4', 'ext': 'flv',
'title': 'Episode_teen-mom-2_shows_season-5_episode-1_full-episode_part1', 'title': 'Episode_teen-mom-2_shows_season-5_episode-1_full-episode_part1',
}, },
'params': { 'params': {
# rtmp download # rtmp download
'skip_download': True, 'skip_download': True,
}, },
'skip': 'Blocked at Travis CI',
}, { }, {
# single video in pagePlaylist with different id
'url': 'http://www.mtv.de/news/77491-mtv-movies-spotlight-pixels-teil-3', 'url': 'http://www.mtv.de/news/77491-mtv-movies-spotlight-pixels-teil-3',
'info_dict': { 'info_dict': {
'id': 'local_playlist-4e760566473c4c8c5344', 'id': 'local_playlist-4e760566473c4c8c5344',
@@ -355,6 +355,7 @@ class MTVDEIE(MTVServicesInfoExtractor):
# rtmp download # rtmp download
'skip_download': True, 'skip_download': True,
}, },
'skip': 'Das Video kann zur Zeit nicht abgespielt werden.',
}] }]
def _real_extract(self, url): def _real_extract(self, url):
@@ -367,11 +368,14 @@ class MTVDEIE(MTVServicesInfoExtractor):
r'window\.pagePlaylist\s*=\s*(\[.+?\]);\n', webpage, 'page playlist'), r'window\.pagePlaylist\s*=\s*(\[.+?\]);\n', webpage, 'page playlist'),
video_id) video_id)
def _mrss_url(item):
return item['mrss'] + item.get('mrssvars', '')
# news pages contain single video in playlist with different id # news pages contain single video in playlist with different id
if len(playlist) == 1: if len(playlist) == 1:
return self._get_videos_info_from_url(playlist[0]['mrss'], video_id) return self._get_videos_info_from_url(_mrss_url(playlist[0]), video_id)
for item in playlist: for item in playlist:
item_id = item.get('id') item_id = item.get('id')
if item_id and compat_str(item_id) == video_id: if item_id and compat_str(item_id) == video_id:
return self._get_videos_info_from_url(item['mrss'], video_id) return self._get_videos_info_from_url(_mrss_url(item), video_id)
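
_transform_rtmp_url now returns a protocol-to-URL dict, so a single rendition can yield both an RTMP and an HTTP format with distinct format_ids. A condensed sketch of the same flow (the rendition values are assumed, and 'mp4' stands in for the ext computed elsewhere):

import re

def transform_rtmp_url(rtmp_video_url):
    m = re.match(r'^rtmpe?://.*?/(?P<finalid>gsp\..+?/.*)$', rtmp_video_url)
    if not m:
        return {'rtmp': rtmp_video_url}
    return {'http': 'http://viacommtvstrmfs.fplive.net/' + m.group('finalid')}

rendition = {'bitrate': '1200'}  # assumed rendition attributes
new_urls = transform_rtmp_url('rtmp://cp123.edgefcs.net/ondemand/gsp.comedystor/x/y.mp4')

formats = [{
    'ext': 'flv' if new_url.startswith('rtmp') else 'mp4',
    'url': new_url,
    'format_id': '-'.join(filter(None, [kind, rendition.get('bitrate')])),
} for kind, new_url in new_urls.items()]
print(formats[0]['format_id'])  # http-1200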


@@ -36,7 +36,7 @@ class MuenchenTVIE(InfoExtractor):
title = self._live_title(self._og_search_title(webpage)) title = self._live_title(self._og_search_title(webpage))
data_js = self._search_regex( data_js = self._search_regex(
r'(?s)\nplaylist:\s*(\[.*?}\]),related:', r'(?s)\nplaylist:\s*(\[.*?}\]),',
webpage, 'playlist configuration') webpage, 'playlist configuration')
data_json = js_to_json(data_js) data_json = js_to_json(data_js)
data = json.loads(data_json)[0] data = json.loads(data_json)[0]


@@ -1,16 +1,19 @@
from __future__ import unicode_literals from __future__ import unicode_literals
import re
from .common import InfoExtractor from .common import InfoExtractor
from .theplatform import ThePlatformIE from .adobepass import AdobePassIE
from ..utils import ( from ..utils import (
smuggle_url, smuggle_url,
url_basename, url_basename,
update_url_query, update_url_query,
get_element_by_class,
) )
class NationalGeographicIE(InfoExtractor): class NationalGeographicVideoIE(InfoExtractor):
IE_NAME = 'natgeo' IE_NAME = 'natgeo:video'
_VALID_URL = r'https?://video\.nationalgeographic\.com/.*?' _VALID_URL = r'https?://video\.nationalgeographic\.com/.*?'
_TESTS = [ _TESTS = [
@@ -62,16 +65,16 @@ class NationalGeographicIE(InfoExtractor):
} }
class NationalGeographicChannelIE(ThePlatformIE): class NationalGeographicIE(AdobePassIE):
IE_NAME = 'natgeo:channel' IE_NAME = 'natgeo'
_VALID_URL = r'https?://channel\.nationalgeographic\.com/(?:wild/)?[^/]+/videos/(?P<id>[^/?]+)' _VALID_URL = r'https?://channel\.nationalgeographic\.com/(?:wild/)?[^/]+/(?:videos|episodes)/(?P<id>[^/?]+)'
_TESTS = [ _TESTS = [
{ {
'url': 'http://channel.nationalgeographic.com/the-story-of-god-with-morgan-freeman/videos/uncovering-a-universal-knowledge/', 'url': 'http://channel.nationalgeographic.com/the-story-of-god-with-morgan-freeman/videos/uncovering-a-universal-knowledge/',
'md5': '518c9aa655686cf81493af5cc21e2a04', 'md5': '518c9aa655686cf81493af5cc21e2a04',
'info_dict': { 'info_dict': {
'id': 'nB5vIAfmyllm', 'id': 'vKInpacll2pC',
'ext': 'mp4', 'ext': 'mp4',
'title': 'Uncovering a Universal Knowledge', 'title': 'Uncovering a Universal Knowledge',
'description': 'md5:1a89148475bf931b3661fcd6ddb2ae3a', 'description': 'md5:1a89148475bf931b3661fcd6ddb2ae3a',
@@ -85,7 +88,7 @@ class NationalGeographicChannelIE(ThePlatformIE):
'url': 'http://channel.nationalgeographic.com/wild/destination-wild/videos/the-stunning-red-bird-of-paradise/', 'url': 'http://channel.nationalgeographic.com/wild/destination-wild/videos/the-stunning-red-bird-of-paradise/',
'md5': 'c4912f656b4cbe58f3e000c489360989', 'md5': 'c4912f656b4cbe58f3e000c489360989',
'info_dict': { 'info_dict': {
'id': '3TmMv9OvGwIR', 'id': 'Pok5lWCkiEFA',
'ext': 'mp4', 'ext': 'mp4',
'title': 'The Stunning Red Bird of Paradise', 'title': 'The Stunning Red Bird of Paradise',
'description': 'md5:7bc8cd1da29686be4d17ad1230f0140c', 'description': 'md5:7bc8cd1da29686be4d17ad1230f0140c',
@@ -95,6 +98,10 @@ class NationalGeographicChannelIE(ThePlatformIE):
}, },
'add_ie': ['ThePlatform'], 'add_ie': ['ThePlatform'],
}, },
{
'url': 'http://channel.nationalgeographic.com/the-story-of-god-with-morgan-freeman/episodes/the-power-of-miracles/',
'only_matching': True,
}
] ]
def _real_extract(self, url): def _real_extract(self, url):
@@ -112,7 +119,7 @@ class NationalGeographicChannelIE(ThePlatformIE):
auth_resource_id = self._search_regex( auth_resource_id = self._search_regex(
r"video_auth_resourceId\s*=\s*'([^']+)'", r"video_auth_resourceId\s*=\s*'([^']+)'",
webpage, 'auth resource id') webpage, 'auth resource id')
query['auth'] = self._extract_mvpd_auth(url, display_id, 'natgeo', auth_resource_id) or '' query['auth'] = self._extract_mvpd_auth(url, display_id, 'natgeo', auth_resource_id)
return { return {
'_type': 'url_transparent', '_type': 'url_transparent',
@@ -122,3 +129,40 @@ class NationalGeographicChannelIE(ThePlatformIE):
{'force_smil_url': True}), {'force_smil_url': True}),
'display_id': display_id, 'display_id': display_id,
} }
class NationalGeographicEpisodeGuideIE(InfoExtractor):
IE_NAME = 'natgeo:episodeguide'
_VALID_URL = r'https?://channel\.nationalgeographic\.com/(?:wild/)?(?P<id>[^/]+)/episode-guide'
_TESTS = [
{
'url': 'http://channel.nationalgeographic.com/the-story-of-god-with-morgan-freeman/episode-guide/',
'info_dict': {
'id': 'the-story-of-god-with-morgan-freeman-season-1',
'title': 'The Story of God with Morgan Freeman - Season 1',
},
'playlist_mincount': 6,
},
{
'url': 'http://channel.nationalgeographic.com/underworld-inc/episode-guide/?s=2',
'info_dict': {
'id': 'underworld-inc-season-2',
'title': 'Underworld, Inc. - Season 2',
},
'playlist_mincount': 7,
},
]
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
show = get_element_by_class('show', webpage)
selected_season = self._search_regex(
r'<div[^>]+class="select-seasons[^"]*".*?<a[^>]*>(.*?)</a>',
webpage, 'selected season')
entries = [
self.url_result(self._proto_relative_url(entry_url), 'NationalGeographic')
for entry_url in re.findall('(?s)<div[^>]+class="col-inner"[^>]*?>.*?<a[^>]+href="([^"]+)"', webpage)]
return self.playlist_result(
entries, '%s-%s' % (display_id, selected_season.lower().replace(' ', '-')),
'%s - %s' % (show, selected_season))
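
The playlist ids in the tests above come straight from slugifying the selected season onto the display id:

display_id = 'the-story-of-god-with-morgan-freeman'
selected_season = 'Season 1'
print('%s-%s' % (display_id, selected_season.lower().replace(' ', '-')))
# the-story-of-god-with-morgan-freeman-season-1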


@@ -4,12 +4,10 @@ from __future__ import unicode_literals
import re import re
from .common import InfoExtractor from .common import InfoExtractor
from ..compat import (
compat_urllib_parse_urlencode,
compat_urlparse,
)
from ..utils import ( from ..utils import (
ExtractorError, ExtractorError,
int_or_none,
update_url_query,
) )
@@ -51,48 +49,74 @@ class NaverIE(InfoExtractor):
if error: if error:
raise ExtractorError(error, expected=True) raise ExtractorError(error, expected=True)
raise ExtractorError('couldn\'t extract vid and key') raise ExtractorError('couldn\'t extract vid and key')
vid = m_id.group(1) video_data = self._download_json(
key = m_id.group(2) 'http://play.rmcnmv.naver.com/vod/play/v2.0/' + m_id.group(1),
query = compat_urllib_parse_urlencode({'vid': vid, 'inKey': key, }) video_id, query={
query_urls = compat_urllib_parse_urlencode({ 'key': m_id.group(2),
'masterVid': vid, })
'protocol': 'p2p', meta = video_data['meta']
'inKey': key, title = meta['subject']
})
info = self._download_xml(
'http://serviceapi.rmcnmv.naver.com/flash/videoInfo.nhn?' + query,
video_id, 'Downloading video info')
urls = self._download_xml(
'http://serviceapi.rmcnmv.naver.com/flash/playableEncodingOption.nhn?' + query_urls,
video_id, 'Downloading video formats info')
formats = [] formats = []
for format_el in urls.findall('EncodingOptions/EncodingOption'):
domain = format_el.find('Domain').text def extract_formats(streams, stream_type, query={}):
uri = format_el.find('uri').text for stream in streams:
f = { stream_url = stream.get('source')
'url': compat_urlparse.urljoin(domain, uri), if not stream_url:
'ext': 'mp4', continue
'width': int(format_el.find('width').text), stream_url = update_url_query(stream_url, query)
'height': int(format_el.find('height').text), encoding_option = stream.get('encodingOption', {})
} bitrate = stream.get('bitrate', {})
if domain.startswith('rtmp'): formats.append({
# urlparse does not support custom schemes 'format_id': '%s_%s' % (stream.get('type') or stream_type, encoding_option.get('id') or encoding_option.get('name')),
# https://bugs.python.org/issue18828 'url': stream_url,
f.update({ 'width': int_or_none(encoding_option.get('width')),
'url': domain + uri, 'height': int_or_none(encoding_option.get('height')),
'ext': 'flv', 'vbr': int_or_none(bitrate.get('video')),
'rtmp_protocol': '1', # rtmpt 'abr': int_or_none(bitrate.get('audio')),
'filesize': int_or_none(stream.get('size')),
'protocol': 'm3u8_native' if stream_type == 'HLS' else None,
}) })
formats.append(f)
extract_formats(video_data.get('videos', {}).get('list', []), 'H264')
for stream_set in video_data.get('streams', []):
query = {}
for param in stream_set.get('keys', []):
query[param['name']] = param['value']
stream_type = stream_set.get('type')
videos = stream_set.get('videos')
if videos:
extract_formats(videos, stream_type, query)
elif stream_type == 'HLS':
stream_url = stream_set.get('source')
if not stream_url:
continue
formats.extend(self._extract_m3u8_formats(
update_url_query(stream_url, query), video_id,
'mp4', 'm3u8_native', m3u8_id=stream_type, fatal=False))
self._sort_formats(formats) self._sort_formats(formats)
subtitles = {}
for caption in video_data.get('captions', {}).get('list', []):
caption_url = caption.get('source')
if not caption_url:
continue
subtitles.setdefault(caption.get('language') or caption.get('locale'), []).append({
'url': caption_url,
})
upload_date = self._search_regex(
r'<span[^>]+class="date".*?(\d{4}\.\d{2}\.\d{2})',
webpage, 'upload date', fatal=False)
if upload_date:
upload_date = upload_date.replace('.', '')
return { return {
'id': video_id, 'id': video_id,
'title': info.find('Subject').text, 'title': title,
'formats': formats, 'formats': formats,
'subtitles': subtitles,
'description': self._og_search_description(webpage), 'description': self._og_search_description(webpage),
'thumbnail': self._og_search_thumbnail(webpage), 'thumbnail': meta.get('cover', {}).get('source') or self._og_search_thumbnail(webpage),
'upload_date': info.find('WriteDate').text.replace('.', ''), 'view_count': int_or_none(meta.get('count')),
'view_count': int(info.find('PlayCount').text), 'upload_date': upload_date,
} }
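
The rewrite swaps the legacy XML endpoints for the play.rmcnmv JSON API. A sketch of the per-stream walk against a simplified response shape inferred from the code above (all field values are fabricated):

video_data = {
    'meta': {'subject': 'Sample clip', 'count': 1234},
    'videos': {'list': [{
        'source': 'http://cdn.example.com/v.mp4',
        'type': 'H264',
        'encodingOption': {'id': '720P', 'width': 1280, 'height': 720},
        'bitrate': {'video': 2000, 'audio': 128},
        'size': 104857600,
    }]},
}

formats = []
for stream in video_data.get('videos', {}).get('list', []):
    source = stream.get('source')
    if not source:
        continue
    enc = stream.get('encodingOption', {})
    br = stream.get('bitrate', {})
    formats.append({
        'format_id': '%s_%s' % (stream.get('type') or 'H264', enc.get('id')),
        'url': source,
        'width': enc.get('width'), 'height': enc.get('height'),
        'vbr': br.get('video'), 'abr': br.get('audio'),
    })
print(formats[0]['format_id'])  # H264_720P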


@@ -1,30 +0,0 @@
# coding: utf-8
from __future__ import unicode_literals
from .mtv import MTVServicesInfoExtractor
from ..compat import compat_urllib_parse_urlencode
class NextMovieIE(MTVServicesInfoExtractor):
IE_NAME = 'nextmovie.com'
_VALID_URL = r'https?://(?:www\.)?nextmovie\.com/shows/[^/]+/\d{4}-\d{2}-\d{2}/(?P<id>[^/?#]+)'
_FEED_URL = 'http://lite.dextr.mtvi.com/service1/dispatch.htm'
_TESTS = [{
'url': 'http://www.nextmovie.com/shows/exclusives/2013-03-10/mgid:uma:videolist:nextmovie.com:1715019/',
'md5': '09a9199f2f11f10107d04fcb153218aa',
'info_dict': {
'id': '961726',
'ext': 'mp4',
'title': 'The Muppets\' Gravity',
},
}]
def _get_feed_query(self, uri):
return compat_urllib_parse_urlencode({
'feed': '1505',
'mgid': uri,
})
def _real_extract(self, url):
mgid = self._match_id(url)
return self._get_videos_info(mgid)


@@ -7,6 +7,7 @@ from ..utils import update_url_query
class NickIE(MTVServicesInfoExtractor): class NickIE(MTVServicesInfoExtractor):
# None of the videos on the website are still alive?
IE_NAME = 'nick.com' IE_NAME = 'nick.com'
_VALID_URL = r'https?://(?:www\.)?nick(?:jr)?\.com/(?:videos/clip|[^/]+/videos)/(?P<id>[^/?#.]+)' _VALID_URL = r'https?://(?:www\.)?nick(?:jr)?\.com/(?:videos/clip|[^/]+/videos)/(?P<id>[^/?#.]+)'
_FEED_URL = 'http://udat.mtvnservices.com/service1/dispatch.htm' _FEED_URL = 'http://udat.mtvnservices.com/service1/dispatch.htm'


@@ -11,70 +11,64 @@ from ..utils import (
class NTVRuIE(InfoExtractor): class NTVRuIE(InfoExtractor):
IE_NAME = 'ntv.ru' IE_NAME = 'ntv.ru'
_VALID_URL = r'https?://(?:www\.)?ntv\.ru/(?P<id>.+)' _VALID_URL = r'https?://(?:www\.)?ntv\.ru/(?:[^/]+/)*(?P<id>[^/?#&]+)'
_TESTS = [ _TESTS = [{
{ 'url': 'http://www.ntv.ru/novosti/863142/',
'url': 'http://www.ntv.ru/novosti/863142/', 'md5': 'ba7ea172a91cb83eb734cad18c10e723',
'md5': 'ba7ea172a91cb83eb734cad18c10e723', 'info_dict': {
'info_dict': { 'id': '746000',
'id': '746000', 'ext': 'mp4',
'ext': 'mp4', 'title': 'Командующий Черноморским флотом провел переговоры в штабе ВМС Украины',
'title': 'Командующий Черноморским флотом провел переговоры в штабе ВМС Украины', 'description': 'Командующий Черноморским флотом провел переговоры в штабе ВМС Украины',
'description': 'Командующий Черноморским флотом провел переговоры в штабе ВМС Украины', 'thumbnail': 're:^http://.*\.jpg',
'thumbnail': 're:^http://.*\.jpg', 'duration': 136,
'duration': 136,
},
}, },
{ }, {
'url': 'http://www.ntv.ru/video/novosti/750370/', 'url': 'http://www.ntv.ru/video/novosti/750370/',
'md5': 'adecff79691b4d71e25220a191477124', 'md5': 'adecff79691b4d71e25220a191477124',
'info_dict': { 'info_dict': {
'id': '750370', 'id': '750370',
'ext': 'mp4', 'ext': 'mp4',
'title': 'Родные пассажиров пропавшего Boeing не верят в трагический исход', 'title': 'Родные пассажиров пропавшего Boeing не верят в трагический исход',
'description': 'Родные пассажиров пропавшего Boeing не верят в трагический исход', 'description': 'Родные пассажиров пропавшего Boeing не верят в трагический исход',
'thumbnail': 're:^http://.*\.jpg', 'thumbnail': 're:^http://.*\.jpg',
'duration': 172, 'duration': 172,
},
}, },
{ }, {
'url': 'http://www.ntv.ru/peredacha/segodnya/m23700/o232416', 'url': 'http://www.ntv.ru/peredacha/segodnya/m23700/o232416',
'md5': '82dbd49b38e3af1d00df16acbeab260c', 'md5': '82dbd49b38e3af1d00df16acbeab260c',
'info_dict': { 'info_dict': {
'id': '747480', 'id': '747480',
'ext': 'mp4', 'ext': 'mp4',
'title': '«Сегодня». 21 марта 2014 года. 16:00', 'title': '«Сегодня». 21 марта 2014 года. 16:00',
'description': '«Сегодня». 21 марта 2014 года. 16:00', 'description': '«Сегодня». 21 марта 2014 года. 16:00',
'thumbnail': 're:^http://.*\.jpg', 'thumbnail': 're:^http://.*\.jpg',
'duration': 1496, 'duration': 1496,
},
}, },
{ }, {
'url': 'http://www.ntv.ru/kino/Koma_film', 'url': 'http://www.ntv.ru/kino/Koma_film',
'md5': 'f825770930937aa7e5aca0dc0d29319a', 'md5': 'f825770930937aa7e5aca0dc0d29319a',
'info_dict': { 'info_dict': {
'id': '1007609', 'id': '1007609',
'ext': 'mp4', 'ext': 'mp4',
'title': 'Остросюжетный фильм «Кома»', 'title': 'Остросюжетный фильм «Кома»',
'description': 'Остросюжетный фильм «Кома»', 'description': 'Остросюжетный фильм «Кома»',
'thumbnail': 're:^http://.*\.jpg', 'thumbnail': 're:^http://.*\.jpg',
'duration': 5592, 'duration': 5592,
},
}, },
{ }, {
'url': 'http://www.ntv.ru/serial/Delo_vrachey/m31760/o233916/', 'url': 'http://www.ntv.ru/serial/Delo_vrachey/m31760/o233916/',
'md5': '9320cd0e23f3ea59c330dc744e06ff3b', 'md5': '9320cd0e23f3ea59c330dc744e06ff3b',
'info_dict': { 'info_dict': {
'id': '751482', 'id': '751482',
'ext': 'mp4', 'ext': 'mp4',
'title': '«Дело врачей»: «Деревце жизни»', 'title': '«Дело врачей»: «Деревце жизни»',
'description': '«Дело врачей»: «Деревце жизни»', 'description': '«Дело врачей»: «Деревце жизни»',
'thumbnail': 're:^http://.*\.jpg', 'thumbnail': 're:^http://.*\.jpg',
'duration': 2590, 'duration': 2590,
},
}, },
] }]
_VIDEO_ID_REGEXES = [ _VIDEO_ID_REGEXES = [
r'<meta property="og:url" content="http://www\.ntv\.ru/video/(\d+)', r'<meta property="og:url" content="http://www\.ntv\.ru/video/(\d+)',
@@ -87,11 +81,21 @@ class NTVRuIE(InfoExtractor):
webpage = self._download_webpage(url, video_id) webpage = self._download_webpage(url, video_id)
video_id = self._html_search_regex(self._VIDEO_ID_REGEXES, webpage, 'video id') video_url = self._og_search_property(
('video', 'video:iframe'), webpage, default=None)
if video_url:
video_id = self._search_regex(
r'https?://(?:www\.)?ntv\.ru/video/(?:embed/)?(\d+)',
video_url, 'video id', default=None)
if not video_id:
video_id = self._html_search_regex(
self._VIDEO_ID_REGEXES, webpage, 'video id')
player = self._download_xml( player = self._download_xml(
'http://www.ntv.ru/vi%s/' % video_id, 'http://www.ntv.ru/vi%s/' % video_id,
video_id, 'Downloading video XML') video_id, 'Downloading video XML')
title = clean_html(xpath_text(player, './data/title', 'title', fatal=True)) title = clean_html(xpath_text(player, './data/title', 'title', fatal=True))
description = clean_html(xpath_text(player, './data/description', 'description')) description = clean_html(xpath_text(player, './data/description', 'description'))
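
Rather than going straight to _VIDEO_ID_REGEXES, the extractor now tries the page's og:video / og:video:iframe URL first, since it embeds the numeric id. The id-parsing step on its own:

import re

og_video = 'http://www.ntv.ru/video/embed/746000/'  # hypothetical og:video value

m = re.search(r'https?://(?:www\.)?ntv\.ru/video/(?:embed/)?(\d+)', og_video)
print(m.group(1) if m else None)  # 746000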


@@ -1,15 +1,14 @@
# coding: utf-8 # coding: utf-8
from __future__ import unicode_literals from __future__ import unicode_literals, division
import re import math
from .common import InfoExtractor from .common import InfoExtractor
from ..compat import compat_chr from ..compat import compat_chr
from ..utils import ( from ..utils import (
decode_png,
determine_ext, determine_ext,
encode_base_n,
ExtractorError, ExtractorError,
mimetype2ext,
) )
@@ -41,60 +40,6 @@ class OpenloadIE(InfoExtractor):
'only_matching': True, 'only_matching': True,
}] }]
@staticmethod
def openload_level2_debase(m):
radix, num = int(m.group(1)) + 27, int(m.group(2))
return '"' + encode_base_n(num, radix) + '"'
@classmethod
def openload_level2(cls, txt):
# The function name is ǃ \u01c3
# Using escaped unicode literals does not work in Python 3.2
return re.sub(r'ǃ\((\d+),(\d+)\)', cls.openload_level2_debase, txt, re.UNICODE).replace('"+"', '')
# Openload uses a variant of aadecode
# openload_decode and related functions are originally written by
# vitas@matfyz.cz and released with public domain
# See https://github.com/rg3/youtube-dl/issues/8489
@classmethod
def openload_decode(cls, txt):
symbol_table = [
('_', '(゚Д゚) [゚Θ゚]'),
('a', '(゚Д゚) [゚ω゚ノ]'),
('b', '(゚Д゚) [゚Θ゚ノ]'),
('c', '(゚Д゚) [\'c\']'),
('d', '(゚Д゚) [゚ー゚ノ]'),
('e', '(゚Д゚) [゚Д゚ノ]'),
('f', '(゚Д゚) [1]'),
('o', '(゚Д゚) [\'o\']'),
('u', '(o゚ー゚o)'),
('c', '(゚Д゚) [\'c\']'),
('7', '((゚ー゚) + (o^_^o))'),
('6', '((o^_^o) +(o^_^o) +(c^_^o))'),
('5', '((゚ー゚) + (゚Θ゚))'),
('4', '(-~3)'),
('3', '(-~-~1)'),
('2', '(-~1)'),
('1', '(-~0)'),
('0', '((c^_^o)-(c^_^o))'),
]
delim = '(゚Д゚)[゚ε゚]+'
ret = ''
for aachar in txt.split(delim):
for val, pat in symbol_table:
aachar = aachar.replace(pat, val)
aachar = aachar.replace('+ ', '')
m = re.match(r'^\d+', aachar)
if m:
ret += compat_chr(int(m.group(0), 8))
else:
m = re.match(r'^u([\da-f]+)', aachar)
if m:
ret += compat_chr(int(m.group(1), 16))
return cls.openload_level2(ret)
def _real_extract(self, url): def _real_extract(self, url):
video_id = self._match_id(url) video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id) webpage = self._download_webpage(url, video_id)
@@ -102,29 +47,77 @@ class OpenloadIE(InfoExtractor):
if 'File not found' in webpage: if 'File not found' in webpage:
raise ExtractorError('File not found', expected=True) raise ExtractorError('File not found', expected=True)
code = self._search_regex( # The following extraction logic is proposed by @Belderak and @gdkchan
r'</video>\s*</div>\s*<script[^>]+>[^>]+</script>\s*<script[^>]+>([^<]+)</script>', # and declared to be used freely in youtube-dl
webpage, 'JS code') # See https://github.com/rg3/youtube-dl/issues/9706
decoded = self.openload_decode(code) numbers_js = self._download_webpage(
'https://openload.co/assets/js/obfuscator/n.js', video_id,
note='Downloading signature numbers')
signums = self._search_regex(
r'window\.signatureNumbers\s*=\s*[\'"](?P<data>[a-z]+)[\'"]',
numbers_js, 'signature numbers', group='data')
video_url = self._search_regex( linkimg_uri = self._search_regex(
r'return\s+"(https?://[^"]+)"', decoded, 'video URL') r'<img[^>]+id="linkimg"[^>]+src="([^"]+)"', webpage, 'link image')
linkimg = self._request_webpage(
linkimg_uri, video_id, note=False).read()
width, height, pixels = decode_png(linkimg)
output = ''
for y in range(height):
for x in range(width):
r, g, b = pixels[y][3 * x:3 * x + 3]
if r == 0 and g == 0 and b == 0:
break
else:
output += compat_chr(r)
output += compat_chr(g)
output += compat_chr(b)
img_str_length = len(output) // 200
img_str = [[0 for x in range(img_str_length)] for y in range(10)]
sig_str_length = len(signums) // 260
sig_str = [[0 for x in range(sig_str_length)] for y in range(10)]
for i in range(10):
for j in range(img_str_length):
begin = i * img_str_length * 20 + j * 20
img_str[i][j] = output[begin:begin + 20]
for j in range(sig_str_length):
begin = i * sig_str_length * 26 + j * 26
sig_str[i][j] = signums[begin:begin + 26]
parts = []
# TODO: find better names for str_, chr_ and sum_
str_ = ''
for i in [2, 3, 5, 7]:
str_ = ''
sum_ = float(99)
for j in range(len(sig_str[i])):
for chr_idx in range(len(img_str[i][j])):
if sum_ > float(122):
sum_ = float(98)
chr_ = compat_chr(int(math.floor(sum_)))
if sig_str[i][j][chr_idx] == chr_ and j >= len(str_):
sum_ += float(2.5)
str_ += img_str[i][j][chr_idx]
parts.append(str_.replace(',', ''))
video_url = 'https://openload.co/stream/%s~%s~%s~%s' % (parts[3], parts[1], parts[2], parts[0])
title = self._og_search_title(webpage, default=None) or self._search_regex( title = self._og_search_title(webpage, default=None) or self._search_regex(
r'<span[^>]+class=["\']title["\'][^>]*>([^<]+)', webpage, r'<span[^>]+class=["\']title["\'][^>]*>([^<]+)', webpage,
'title', default=None) or self._html_search_meta( 'title', default=None) or self._html_search_meta(
'description', webpage, 'title', fatal=True) 'description', webpage, 'title', fatal=True)
ext = mimetype2ext(self._search_regex(
r'window\.vt\s*=\s*(["\'])(?P<mimetype>.+?)\1', decoded,
'mimetype', default=None, group='mimetype')) or determine_ext(
video_url, 'mp4')
return { return {
'id': video_id, 'id': video_id,
'title': title, 'title': title,
'ext': ext,
'thumbnail': self._og_search_thumbnail(webpage, default=None), 'thumbnail': self._og_search_thumbnail(webpage, default=None),
'url': video_url, 'url': video_url,
# Seems all videos have extensions in their titles
'ext': determine_ext(title),
} }
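
The heart of the replacement logic hides the stream token in the #linkimg image: each pixel contributes its R, G and B bytes as three characters, and an all-black pixel terminates a row. That decoding step in isolation (the pixel row below is fabricated; in the real code decode_png supplies the rows):

pixels = [
    [72, 105, 33, 0, 0, 0, 88, 88, 88],  # 'Hi!' followed by a black terminator
]
height, width = len(pixels), len(pixels[0]) // 3

output = ''
for y in range(height):
    for x in range(width):
        r, g, b = pixels[y][3 * x:3 * x + 3]
        if r == 0 and g == 0 and b == 0:
            break  # black pixel ends this row
        output += chr(r) + chr(g) + chr(b)
print(output)  # Hi!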


@@ -137,13 +137,16 @@ class ORFTVthekIE(InfoExtractor):
class ORFOE1IE(InfoExtractor): class ORFOE1IE(InfoExtractor):
IE_NAME = 'orf:oe1' IE_NAME = 'orf:oe1'
IE_DESC = 'Radio Österreich 1' IE_DESC = 'Radio Österreich 1'
_VALID_URL = r'https?://oe1\.orf\.at/(?:programm/|konsole.*?#\?track_id=)(?P<id>[0-9]+)' _VALID_URL = r'https?://oe1\.orf\.at/(?:programm/|konsole\?.*?\btrack_id=)(?P<id>[0-9]+)'
# Audio on ORF radio is only available for 7 days, so we can't add tests. # Audio on ORF radio is only available for 7 days, so we can't add tests.
_TEST = { _TESTS = [{
'url': 'http://oe1.orf.at/konsole?show=on_demand#?track_id=394211', 'url': 'http://oe1.orf.at/konsole?show=on_demand#?track_id=394211',
'only_matching': True, 'only_matching': True,
} }, {
'url': 'http://oe1.orf.at/konsole?show=ondemand&track_id=443608&load_day=/programm/konsole/tag/20160726',
'only_matching': True,
}]
def _real_extract(self, url): def _real_extract(self, url):
show_id = self._match_id(url) show_id = self._match_id(url)


@@ -4,13 +4,13 @@ from __future__ import unicode_literals
import re import re
from .common import InfoExtractor from .common import InfoExtractor
from ..compat import compat_HTTPError
from ..utils import ( from ..utils import (
ExtractorError, ExtractorError,
determine_ext, determine_ext,
int_or_none, int_or_none,
js_to_json, js_to_json,
strip_jsonp, strip_jsonp,
strip_or_none,
unified_strdate, unified_strdate,
US_RATINGS, US_RATINGS,
) )
@@ -201,7 +201,7 @@ class PBSIE(InfoExtractor):
'id': '2365006249', 'id': '2365006249',
'ext': 'mp4', 'ext': 'mp4',
'title': 'Constitution USA with Peter Sagal - A More Perfect Union', 'title': 'Constitution USA with Peter Sagal - A More Perfect Union',
'description': 'md5:36f341ae62e251b8f5bd2b754b95a071', 'description': 'md5:31b664af3c65fd07fa460d306b837d00',
'duration': 3190, 'duration': 3190,
}, },
}, },
@@ -212,7 +212,7 @@ class PBSIE(InfoExtractor):
'id': '2365297690', 'id': '2365297690',
'ext': 'mp4', 'ext': 'mp4',
'title': 'FRONTLINE - Losing Iraq', 'title': 'FRONTLINE - Losing Iraq',
'description': 'md5:4d3eaa01f94e61b3e73704735f1196d9', 'description': 'md5:5979a4d069b157f622d02bff62fbe654',
'duration': 5050, 'duration': 5050,
}, },
}, },
@@ -223,7 +223,7 @@ class PBSIE(InfoExtractor):
'id': '2201174722', 'id': '2201174722',
'ext': 'mp4', 'ext': 'mp4',
'title': 'PBS NewsHour - Cyber Schools Gain Popularity, but Quality Questions Persist', 'title': 'PBS NewsHour - Cyber Schools Gain Popularity, but Quality Questions Persist',
'description': 'md5:95a19f568689d09a166dff9edada3301', 'description': 'md5:86ab9a3d04458b876147b355788b8781',
'duration': 801, 'duration': 801,
}, },
}, },
@@ -268,7 +268,7 @@ class PBSIE(InfoExtractor):
'display_id': 'player', 'display_id': 'player',
'ext': 'mp4', 'ext': 'mp4',
'title': 'American Experience - Death and the Civil War, Chapter 1', 'title': 'American Experience - Death and the Civil War, Chapter 1',
'description': 'md5:1b80a74e0380ed2a4fb335026de1600d', 'description': 'md5:67fa89a9402e2ee7d08f53b920674c18',
'duration': 682, 'duration': 682,
'thumbnail': 're:^https?://.*\.jpg$', 'thumbnail': 're:^https?://.*\.jpg$',
}, },
@@ -294,13 +294,13 @@ class PBSIE(InfoExtractor):
# "<iframe style='position: absolute;<br />\ntop: 0; left: 0;' ...", see # "<iframe style='position: absolute;<br />\ntop: 0; left: 0;' ...", see
# https://github.com/rg3/youtube-dl/issues/7059) # https://github.com/rg3/youtube-dl/issues/7059)
'url': 'http://www.pbs.org/food/features/a-chefs-life-season-3-episode-5-prickly-business/', 'url': 'http://www.pbs.org/food/features/a-chefs-life-season-3-episode-5-prickly-business/',
'md5': '84ced42850d78f1d4650297356e95e6f', 'md5': '59b0ef5009f9ac8a319cc5efebcd865e',
'info_dict': { 'info_dict': {
'id': '2365546844', 'id': '2365546844',
'display_id': 'a-chefs-life-season-3-episode-5-prickly-business', 'display_id': 'a-chefs-life-season-3-episode-5-prickly-business',
'ext': 'mp4', 'ext': 'mp4',
'title': "A Chef's Life - Season 3, Ep. 5: Prickly Business", 'title': "A Chef's Life - Season 3, Ep. 5: Prickly Business",
'description': 'md5:54033c6baa1f9623607c6e2ed245888b', 'description': 'md5:c0ff7475a4b70261c7e58f493c2792a5',
'duration': 1480, 'duration': 1480,
'thumbnail': 're:^https?://.*\.jpg$', 'thumbnail': 're:^https?://.*\.jpg$',
}, },
@@ -313,7 +313,7 @@ class PBSIE(InfoExtractor):
'display_id': 'the-atomic-artists', 'display_id': 'the-atomic-artists',
'ext': 'mp4', 'ext': 'mp4',
'title': 'FRONTLINE - The Atomic Artists', 'title': 'FRONTLINE - The Atomic Artists',
'description': 'md5:1a2481e86b32b2e12ec1905dd473e2c1', 'description': 'md5:f677e4520cfacb4a5ce1471e31b57800',
'duration': 723, 'duration': 723,
'thumbnail': 're:^https?://.*\.jpg$', 'thumbnail': 're:^https?://.*\.jpg$',
}, },
@@ -324,7 +324,7 @@ class PBSIE(InfoExtractor):
{ {
# Serves HD only via widget/partnerplayer page
'url': 'http://www.pbs.org/video/2365641075/', 'url': 'http://www.pbs.org/video/2365641075/',
'md5': 'acfd4c400b48149a44861cb16dd305cf', 'md5': 'fdf907851eab57211dd589cf12006666',
'info_dict': { 'info_dict': {
'id': '2365641075', 'id': '2365641075',
'ext': 'mp4', 'ext': 'mp4',
@@ -353,11 +353,16 @@ class PBSIE(InfoExtractor):
def _extract_webpage(self, url): def _extract_webpage(self, url):
mobj = re.match(self._VALID_URL, url) mobj = re.match(self._VALID_URL, url)
description = None
presumptive_id = mobj.group('presumptive_id') presumptive_id = mobj.group('presumptive_id')
display_id = presumptive_id display_id = presumptive_id
if presumptive_id: if presumptive_id:
webpage = self._download_webpage(url, display_id) webpage = self._download_webpage(url, display_id)
description = strip_or_none(self._og_search_description(
webpage, default=None) or self._html_search_meta(
'description', webpage, default=None))
upload_date = unified_strdate(self._search_regex( upload_date = unified_strdate(self._search_regex(
r'<input type="hidden" id="air_date_[0-9]+" value="([^"]+)"', r'<input type="hidden" id="air_date_[0-9]+" value="([^"]+)"',
webpage, 'upload date', default=None)) webpage, 'upload date', default=None))
@@ -370,7 +375,7 @@ class PBSIE(InfoExtractor):
for p in MULTI_PART_REGEXES: for p in MULTI_PART_REGEXES:
tabbed_videos = re.findall(p, webpage) tabbed_videos = re.findall(p, webpage)
if tabbed_videos: if tabbed_videos:
return tabbed_videos, presumptive_id, upload_date return tabbed_videos, presumptive_id, upload_date, description
MEDIA_ID_REGEXES = [ MEDIA_ID_REGEXES = [
r"div\s*:\s*'videoembed'\s*,\s*mediaid\s*:\s*'(\d+)'", # frontline video embed r"div\s*:\s*'videoembed'\s*,\s*mediaid\s*:\s*'(\d+)'", # frontline video embed
@@ -382,7 +387,7 @@ class PBSIE(InfoExtractor):
media_id = self._search_regex( media_id = self._search_regex(
MEDIA_ID_REGEXES, webpage, 'media ID', fatal=False, default=None) MEDIA_ID_REGEXES, webpage, 'media ID', fatal=False, default=None)
if media_id: if media_id:
return media_id, presumptive_id, upload_date return media_id, presumptive_id, upload_date, description
# Frontline video embedded via flp # Frontline video embedded via flp
video_id = self._search_regex( video_id = self._search_regex(
@@ -399,7 +404,7 @@ class PBSIE(InfoExtractor):
'http://www.pbs.org/wgbh/pages/frontline/.json/getdir/getdir%d.json' % prg_id, 'http://www.pbs.org/wgbh/pages/frontline/.json/getdir/getdir%d.json' % prg_id,
presumptive_id, 'Downloading getdir JSON', presumptive_id, 'Downloading getdir JSON',
transform_source=strip_jsonp) transform_source=strip_jsonp)
return getdir['mid'], presumptive_id, upload_date return getdir['mid'], presumptive_id, upload_date, description
for iframe in re.findall(r'(?s)<iframe(.+?)></iframe>', webpage): for iframe in re.findall(r'(?s)<iframe(.+?)></iframe>', webpage):
url = self._search_regex( url = self._search_regex(
@@ -423,10 +428,10 @@ class PBSIE(InfoExtractor):
video_id = mobj.group('id') video_id = mobj.group('id')
display_id = video_id display_id = video_id
return video_id, display_id, None return video_id, display_id, None, description
def _real_extract(self, url): def _real_extract(self, url):
video_id, display_id, upload_date = self._extract_webpage(url) video_id, display_id, upload_date, description = self._extract_webpage(url)
if isinstance(video_id, list): if isinstance(video_id, list):
entries = [self.url_result( entries = [self.url_result(
@@ -448,17 +453,6 @@ class PBSIE(InfoExtractor):
redirects.append(redirect) redirects.append(redirect)
redirect_urls.add(redirect_url) redirect_urls.add(redirect_url)
try:
video_info = self._download_json(
'http://player.pbs.org/videoInfo/%s?format=json&type=partner' % video_id,
display_id, 'Downloading video info JSON')
extract_redirect_urls(video_info)
info = video_info
except ExtractorError as e:
# videoInfo API may not work for some videos
if not isinstance(e.cause, compat_HTTPError) or e.cause.code != 404:
raise
# Player pages may also serve different qualities # Player pages may also serve different qualities
for page in ('widget/partnerplayer', 'portalplayer'): for page in ('widget/partnerplayer', 'portalplayer'):
player = self._download_webpage( player = self._download_webpage(
@@ -511,15 +505,19 @@ class PBSIE(InfoExtractor):
formats)) formats))
if http_url: if http_url:
for m3u8_format in m3u8_formats: for m3u8_format in m3u8_formats:
bitrate = self._search_regex(r'(\d+k)', m3u8_format['url'], 'bitrate', default=None) bitrate = self._search_regex(r'(\d+)k', m3u8_format['url'], 'bitrate', default=None)
# extract only the formats that we know that they will be available as http format. # Lower qualities (150k and 192k) are not available as HTTP formats (see [1]),
# https://projects.pbs.org/confluence/display/coveapi/COVE+Video+Specifications # we won't try extracting them.
if not bitrate or bitrate not in ('400k', '800k', '1200k', '2500k'): # Since summer 2016 higher quality formats (4500k and 6500k) are also available
# albeit they are not documented in [2].
# 1. https://github.com/rg3/youtube-dl/commit/cbc032c8b70a038a69259378c92b4ba97b42d491#commitcomment-17313656
# 2. https://projects.pbs.org/confluence/display/coveapi/COVE+Video+Specifications
if not bitrate or int(bitrate) < 400:
continue continue
f_url = re.sub(r'\d+k|baseline', bitrate, http_url) f_url = re.sub(r'\d+k|baseline', bitrate + 'k', http_url)
# This may produce invalid links sometimes (e.g. # This may produce invalid links sometimes (e.g.
# http://www.pbs.org/wgbh/frontline/film/suicide-plan) # http://www.pbs.org/wgbh/frontline/film/suicide-plan)
if not self._is_valid_url(f_url, display_id, 'http-%s video' % bitrate): if not self._is_valid_url(f_url, display_id, 'http-%sk video' % bitrate):
continue continue
f = m3u8_format.copy() f = m3u8_format.copy()
f.update({ f.update({
@@ -562,11 +560,14 @@ class PBSIE(InfoExtractor):
if alt_title: if alt_title:
info['title'] = alt_title + ' - ' + re.sub(r'^' + alt_title + '[\s\-:]+', '', info['title']) info['title'] = alt_title + ' - ' + re.sub(r'^' + alt_title + '[\s\-:]+', '', info['title'])
description = info.get('description') or info.get(
'program', {}).get('description') or description
return { return {
'id': video_id, 'id': video_id,
'display_id': display_id, 'display_id': display_id,
'title': info['title'], 'title': info['title'],
'description': info.get('description') or info.get('program', {}).get('description'), 'description': description,
'thumbnail': info.get('image_url'), 'thumbnail': info.get('image_url'),
'duration': int_or_none(info.get('duration')), 'duration': int_or_none(info.get('duration')),
'age_limit': age_limit, 'age_limit': age_limit,
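
The reworked bitrate filter above captures the numeric part of the HLS variant's bitrate and substitutes it into the progressive HTTP URL. A minimal standalone sketch of that substitution (both URLs are invented placeholders, not real PBS addresses):

    import re

    m3u8_url = 'https://example.org/hls/video-2500k.m3u8'    # placeholder
    http_url = 'https://example.org/mp4/video-baseline.mp4'  # placeholder

    # Capturing (\d+) without the trailing 'k' is what makes the numeric
    # comparison against 400 possible.
    bitrate = re.search(r'(\d+)k', m3u8_url).group(1)  # '2500'
    if bitrate and int(bitrate) >= 400:  # 150k/192k have no HTTP counterpart
        f_url = re.sub(r'\d+k|baseline', bitrate + 'k', http_url)
        print(f_url)  # https://example.org/mp4/video-2500k.mp4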

youtube_dl/extractor/pokemon.py (new file)

@@ -0,0 +1,58 @@
# coding: utf-8
from __future__ import unicode_literals
import re
from .common import InfoExtractor
from ..utils import (
extract_attributes,
int_or_none,
)
class PokemonIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?pokemon\.com/[a-z]{2}(?:.*?play=(?P<id>[a-z0-9]{32})|/[^/]+/\d+_\d+-(?P<display_id>[^/?#]+))'
_TESTS = [{
'url': 'http://www.pokemon.com/us/pokemon-episodes/19_01-from-a-to-z/?play=true',
'md5': '9fb209ae3a569aac25de0f5afc4ee08f',
'info_dict': {
'id': 'd0436c00c3ce4071ac6cee8130ac54a1',
'ext': 'mp4',
'title': 'From A to Z!',
'description': 'Bonnie makes a new friend, Ash runs into an old friend, and a terrifying premonition begins to unfold!',
'timestamp': 1460478136,
'upload_date': '20160412',
},
'add_ie': ['LimelightMedia'],
}, {
'url': 'http://www.pokemon.com/uk/pokemon-episodes/?play=2e8b5c761f1d4a9286165d7748c1ece2',
'only_matching': True,
}, {
'url': 'http://www.pokemon.com/fr/episodes-pokemon/18_09-un-hiver-inattendu/',
'only_matching': True,
}, {
'url': 'http://www.pokemon.com/de/pokemon-folgen/01_20-bye-bye-smettbo/',
'only_matching': True,
}]
def _real_extract(self, url):
video_id, display_id = re.match(self._VALID_URL, url).groups()
webpage = self._download_webpage(url, video_id or display_id)
video_data = extract_attributes(self._search_regex(
r'(<[^>]+data-video-id="%s"[^>]*>)' % (video_id if video_id else '[a-z0-9]{32}'),
webpage, 'video data element'))
video_id = video_data['data-video-id']
title = video_data['data-video-title']
return {
'_type': 'url_transparent',
'id': video_id,
'url': 'limelight:media:%s' % video_id,
'title': title,
'description': video_data.get('data-video-summary'),
'thumbnail': video_data.get('data-video-poster'),
'series': 'Pokémon',
'season_number': int_or_none(video_data.get('data-video-season')),
'episode': title,
'episode_number': int_or_none(video_data.get('data-video-episode')),
'ie_key': 'LimelightMedia',
}
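
The _VALID_URL above accepts two URL shapes, a 32-character hex play= id and a <season>_<episode>-<slug> path. A quick sanity check of both branches, reusing the pattern and two of the test URLs from the extractor:

    import re

    VALID_URL = (r'https?://(?:www\.)?pokemon\.com/[a-z]{2}'
                 r'(?:.*?play=(?P<id>[a-z0-9]{32})|/[^/]+/\d+_\d+-(?P<display_id>[^/?#]+))')

    for url in (
        'http://www.pokemon.com/uk/pokemon-episodes/?play=2e8b5c761f1d4a9286165d7748c1ece2',
        'http://www.pokemon.com/fr/episodes-pokemon/18_09-un-hiver-inattendu/',
    ):
        print(re.match(VALID_URL, url).groups())
    # ('2e8b5c761f1d4a9286165d7748c1ece2', None)
    # (None, 'un-hiver-inattendu')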

youtube_dl/extractor/porncom.py (new file)

@@ -0,0 +1,89 @@
from __future__ import unicode_literals
import re
from .common import InfoExtractor
from ..compat import compat_urlparse
from ..utils import (
int_or_none,
js_to_json,
parse_filesize,
str_to_int,
)
class PornComIE(InfoExtractor):
_VALID_URL = r'https?://(?:[a-zA-Z]+\.)?porn\.com/videos/(?:(?P<display_id>[^/]+)-)?(?P<id>\d+)'
_TESTS = [{
'url': 'http://www.porn.com/videos/teen-grabs-a-dildo-and-fucks-her-pussy-live-on-1hottie-i-rec-2603339',
'md5': '3f30ce76267533cd12ba999263156de7',
'info_dict': {
'id': '2603339',
'display_id': 'teen-grabs-a-dildo-and-fucks-her-pussy-live-on-1hottie-i-rec',
'ext': 'mp4',
'title': 'Teen grabs a dildo and fucks her pussy live on 1hottie, I rec',
'thumbnail': 're:^https?://.*\.jpg$',
'duration': 551,
'view_count': int,
'age_limit': 18,
},
}, {
'url': 'http://se.porn.com/videos/marsha-may-rides-seth-on-top-of-his-thick-cock-2658067',
'only_matching': True,
}]
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
video_id = mobj.group('id')
display_id = mobj.group('display_id') or video_id
webpage = self._download_webpage(url, display_id)
config = self._parse_json(
self._search_regex(
r'=\s*({.+?})\s*,\s*[\da-zA-Z_]+\s*=',
webpage, 'config', default='{}'),
display_id, transform_source=js_to_json, fatal=False)
if config:
title = config['title']
formats = [{
'url': stream['url'],
'format_id': stream.get('id'),
'height': int_or_none(self._search_regex(
r'^(\d+)[pP]', stream.get('id') or '', 'height', default=None))
} for stream in config['streams'] if stream.get('url')]
thumbnail = (compat_urlparse.urljoin(
config['thumbCDN'], config['poster'])
if config.get('thumbCDN') and config.get('poster') else None)
duration = int_or_none(config.get('length'))
else:
title = self._search_regex(
(r'<title>([^<]+)</title>', r'<h1[^>]*>([^<]+)</h1>'),
webpage, 'title')
formats = [{
'url': compat_urlparse.urljoin(url, format_url),
'format_id': '%sp' % height,
'height': int(height),
'filesize_approx': parse_filesize(filesize),
} for format_url, height, filesize in re.findall(
r'<a[^>]+href="(/download/[^"]+)">MPEG4 (\d+)p<span[^>]*>(\d+\s+[a-zA-Z]+)<',
webpage)]
thumbnail = None
duration = None
self._sort_formats(formats)
view_count = str_to_int(self._search_regex(
r'class=["\']views["\'][^>]*><p>([\d,.]+)', webpage, 'view count'))
return {
'id': video_id,
'display_id': display_id,
'title': title,
'thumbnail': thumbnail,
'duration': duration,
'view_count': view_count,
'formats': formats,
'age_limit': 18,
}
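
The config branch above lifts an inline JavaScript object out of the page and feeds it through js_to_json before parsing. A small sketch of that step against an invented page snippet (assumes youtube-dl itself is importable for js_to_json):

    import json
    import re

    from youtube_dl.utils import js_to_json

    # Invented snippet shaped like porn.com's inline player config
    page = "var config = {title: 'Example clip', length: '551'}, player = init();"

    raw = re.search(r"=\s*({.+?})\s*,\s*[\da-zA-Z_]+\s*=", page).group(1)
    config = json.loads(js_to_json(raw))
    print(config['title'], config['length'])  # Example clip 551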

youtube_dl/extractor/pornotube.py

@@ -3,10 +3,7 @@ from __future__ import unicode_literals
 import json

 from .common import InfoExtractor
-from ..utils import (
-    int_or_none,
-    sanitized_Request,
-)
+from ..utils import int_or_none


 class PornotubeIE(InfoExtractor):
@@ -31,59 +28,55 @@ class PornotubeIE(InfoExtractor):
     def _real_extract(self, url):
         video_id = self._match_id(url)

-        # Fetch origin token
-        js_config = self._download_webpage(
-            'http://www.pornotube.com/assets/src/app/config.js', video_id,
-            note='Download JS config')
-        originAuthenticationSpaceKey = self._search_regex(
-            r"constant\('originAuthenticationSpaceKey',\s*'([^']+)'",
-            js_config, 'originAuthenticationSpaceKey')
-
-        # Fetch actual token
-        token_req_data = {
-            'authenticationSpaceKey': originAuthenticationSpaceKey,
-            'credentials': 'Clip Application',
-        }
-        token_req = sanitized_Request(
-            'https://api.aebn.net/auth/v1/token/primal',
-            data=json.dumps(token_req_data).encode('utf-8'))
-        token_req.add_header('Content-Type', 'application/json')
-        token_req.add_header('Origin', 'http://www.pornotube.com')
-        token_answer = self._download_json(
-            token_req, video_id, note='Requesting primal token')
-        token = token_answer['tokenKey']
+        token = self._download_json(
+            'https://api.aebn.net/auth/v2/origins/authenticate',
+            video_id, note='Downloading token',
+            data=json.dumps({'credentials': 'Clip Application'}).encode('utf-8'),
+            headers={
+                'Content-Type': 'application/json',
+                'Origin': 'http://www.pornotube.com',
+            })['tokenKey']

-        # Get video URL
-        delivery_req = sanitized_Request(
-            'https://api.aebn.net/delivery/v1/clips/%s/MP4' % video_id)
-        delivery_req.add_header('Authorization', token)
-        delivery_info = self._download_json(
-            delivery_req, video_id, note='Downloading delivery information')
-        video_url = delivery_info['mediaUrl']
+        video_url = self._download_json(
+            'https://api.aebn.net/delivery/v1/clips/%s/MP4' % video_id,
+            video_id, note='Downloading delivery information',
+            headers={'Authorization': token})['mediaUrl']

-        # Get additional info (title etc.)
-        info_req = sanitized_Request(
-            'https://api.aebn.net/content/v1/clips/%s?expand='
-            'title,description,primaryImageNumber,startSecond,endSecond,'
-            'movie.title,movie.MovieId,movie.boxCoverFront,movie.stars,'
-            'movie.studios,stars.name,studios.name,categories.name,'
-            'clipActive,movieActive,publishDate,orientations' % video_id)
-        info_req.add_header('Authorization', token)
-        info = self._download_json(
-            info_req, video_id, note='Downloading metadata')
+        FIELDS = (
+            'title', 'description', 'startSecond', 'endSecond', 'publishDate',
+            'studios{name}', 'categories{name}', 'movieId', 'primaryImageNumber'
+        )
+        info = self._download_json(
+            'https://api.aebn.net/content/v2/clips/%s?fields=%s'
+            % (video_id, ','.join(FIELDS)), video_id,
+            note='Downloading metadata',
+            headers={'Authorization': token})
+
+        if isinstance(info, list):
+            info = info[0]
+
+        title = info['title']

         timestamp = int_or_none(info.get('publishDate'), scale=1000)
         uploader = info.get('studios', [{}])[0].get('name')
-        movie_id = info['movie']['movieId']
-        thumbnail = 'http://pic.aebn.net/dis/t/%s/%s_%08d.jpg' % (
-            movie_id, movie_id, info['primaryImageNumber'])
-        categories = [c['name'] for c in info.get('categories')]
+        movie_id = info.get('movieId')
+        primary_image_number = info.get('primaryImageNumber')
+        thumbnail = None
+        if movie_id and primary_image_number:
+            thumbnail = 'http://pic.aebn.net/dis/t/%s/%s_%08d.jpg' % (
+                movie_id, movie_id, primary_image_number)
+        start = int_or_none(info.get('startSecond'))
+        end = int_or_none(info.get('endSecond'))
+        duration = end - start if start and end else None
+        categories = [c['name'] for c in info.get('categories', []) if c.get('name')]

         return {
             'id': video_id,
             'url': video_url,
-            'title': info['title'],
+            'title': title,
             'description': info.get('description'),
+            'duration': duration,
             'timestamp': timestamp,
             'uploader': uploader,
             'thumbnail': thumbnail,
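
Besides moving to the v2 clip API, the rewrite treats most metadata as optional. The timestamp and duration arithmetic leans on int_or_none; a sketch with invented field values (assumes youtube-dl is importable):

    from youtube_dl.utils import int_or_none

    info = {'publishDate': 1467870968000, 'startSecond': '12', 'endSecond': '69'}  # invented

    timestamp = int_or_none(info.get('publishDate'), scale=1000)  # ms -> s: 1467870968
    start = int_or_none(info.get('startSecond'))
    end = int_or_none(info.get('endSecond'))
    duration = end - start if start and end else None  # 57
    print(timestamp, duration)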

youtube_dl/extractor/rbmaradio.py

@@ -1,55 +1,71 @@
-# encoding: utf-8
 from __future__ import unicode_literals

-import json
 import re

 from .common import InfoExtractor
+from ..compat import compat_str
 from ..utils import (
-    ExtractorError,
+    clean_html,
+    int_or_none,
+    unified_timestamp,
+    update_url_query,
 )


 class RBMARadioIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?rbmaradio\.com/shows/(?P<videoID>[^/]+)$'
+    _VALID_URL = r'https?://(?:www\.)?rbmaradio\.com/shows/(?P<show_id>[^/]+)/episodes/(?P<id>[^/?#&]+)'
     _TEST = {
-        'url': 'http://www.rbmaradio.com/shows/ford-lopatin-live-at-primavera-sound-2011',
+        'url': 'https://www.rbmaradio.com/shows/main-stage/episodes/ford-lopatin-live-at-primavera-sound-2011',
         'md5': '6bc6f9bcb18994b4c983bc3bf4384d95',
         'info_dict': {
             'id': 'ford-lopatin-live-at-primavera-sound-2011',
             'ext': 'mp3',
-            'uploader_id': 'ford-lopatin',
-            'location': 'Spain',
-            'description': 'Joel Ford and Daniel Oneohtrix Point Never Lopatin fly their midified pop extravaganza to Spain. Live at Primavera Sound 2011.',
-            'uploader': 'Ford & Lopatin',
-            'title': 'Live at Primavera Sound 2011',
+            'title': 'Main Stage - Ford & Lopatin',
+            'description': 'md5:4f340fb48426423530af5a9d87bd7b91',
+            'thumbnail': 're:^https?://.*\.jpg',
+            'duration': 2452,
+            'timestamp': 1307103164,
+            'upload_date': '20110603',
         },
     }

     def _real_extract(self, url):
-        m = re.match(self._VALID_URL, url)
-        video_id = m.group('videoID')
-
-        webpage = self._download_webpage(url, video_id)
-
-        json_data = self._search_regex(r'window\.gon.*?gon\.show=(.+?);$',
-                                       webpage, 'json data', flags=re.MULTILINE)
-
-        try:
-            data = json.loads(json_data)
-        except ValueError as e:
-            raise ExtractorError('Invalid JSON: ' + str(e))
-
-        video_url = data['akamai_url'] + '&cbr=256'
+        mobj = re.match(self._VALID_URL, url)
+        show_id = mobj.group('show_id')
+        episode_id = mobj.group('id')
+
+        webpage = self._download_webpage(url, episode_id)
+
+        episode = self._parse_json(
+            self._search_regex(
+                r'__INITIAL_STATE__\s*=\s*({.+?})\s*</script>',
+                webpage, 'json data'),
+            episode_id)['episodes'][show_id][episode_id]
+
+        title = episode['title']
+
+        show_title = episode.get('showTitle')
+        if show_title:
+            title = '%s - %s' % (show_title, title)
+
+        formats = [{
+            'url': update_url_query(episode['audioURL'], query={'cbr': abr}),
+            'format_id': compat_str(abr),
+            'abr': abr,
+            'vcodec': 'none',
+        } for abr in (96, 128, 256)]
+
+        description = clean_html(episode.get('longTeaser'))
+        thumbnail = self._proto_relative_url(episode.get('imageURL', {}).get('landscape'))
+        duration = int_or_none(episode.get('duration'))
+        timestamp = unified_timestamp(episode.get('publishedAt'))

         return {
-            'id': video_id,
-            'url': video_url,
-            'title': data['title'],
-            'description': data.get('teaser_text'),
-            'location': data.get('country_of_origin'),
-            'uploader': data.get('host', {}).get('name'),
-            'uploader_id': data.get('host', {}).get('slug'),
-            'thumbnail': data.get('image', {}).get('large_url_2x'),
-            'duration': data.get('duration'),
+            'id': episode_id,
+            'title': title,
+            'description': description,
+            'thumbnail': thumbnail,
+            'duration': duration,
+            'timestamp': timestamp,
+            'formats': formats,
         }
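
Instead of hardcoding '&cbr=256', the new extractor advertises one format per constant bitrate by appending a cbr query parameter. The same loop against a placeholder audio URL (parameter order in the output may vary):

    from youtube_dl.utils import update_url_query  # assumes youtube-dl is importable

    audio_url = 'https://example.net/episode.mp3?token=abc'  # placeholder

    formats = [{
        'url': update_url_query(audio_url, {'cbr': abr}),
        'format_id': str(abr),
        'abr': abr,
        'vcodec': 'none',  # audio only
    } for abr in (96, 128, 256)]
    print(formats[0]['url'])  # https://example.net/episode.mp3?token=abc&cbr=96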

youtube_dl/extractor/rozhlas.py (new file)

@@ -0,0 +1,50 @@
# coding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import (
int_or_none,
remove_start,
)
class RozhlasIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?prehravac\.rozhlas\.cz/audio/(?P<id>[0-9]+)'
_TESTS = [{
'url': 'http://prehravac.rozhlas.cz/audio/3421320',
'md5': '504c902dbc9e9a1fd50326eccf02a7e2',
'info_dict': {
'id': '3421320',
'ext': 'mp3',
'title': 'Echo Pavla Klusáka (30.06.2015 21:00)',
'description': 'Osmdesátiny Terryho Rileyho jsou skvělou příležitostí proletět se elektronickými i akustickými díly zakladatatele minimalismu, který je aktivní už přes padesát let'
}
}, {
'url': 'http://prehravac.rozhlas.cz/audio/3421320/embed',
'skip_download': True,
}]
def _real_extract(self, url):
audio_id = self._match_id(url)
webpage = self._download_webpage(
'http://prehravac.rozhlas.cz/audio/%s' % audio_id, audio_id)
title = self._html_search_regex(
r'<h3>(.+?)</h3>\s*<p[^>]*>.*?</p>\s*<div[^>]+id=["\']player-track',
webpage, 'title', default=None) or remove_start(
self._og_search_title(webpage), 'Radio Wave - ')
description = self._html_search_regex(
r'<p[^>]+title=(["\'])(?P<url>(?:(?!\1).)+)\1[^>]*>.*?</p>\s*<div[^>]+id=["\']player-track',
webpage, 'description', fatal=False, group='url')
duration = int_or_none(self._search_regex(
r'data-duration=["\'](\d+)', webpage, 'duration', default=None))
return {
'id': audio_id,
'url': 'http://media.rozhlas.cz/_audio/%s.mp3' % audio_id,
'title': title,
'description': description,
'duration': duration,
'vcodec': 'none',
}
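
The title fallback above strips the station prefix from the og:title. remove_start only removes the prefix when it is actually present, so both shapes are safe; a two-line check:

    from youtube_dl.utils import remove_start  # assumes youtube-dl is importable

    print(remove_start('Radio Wave - Echo Pavla Klusáka', 'Radio Wave - '))  # Echo Pavla Klusáka
    print(remove_start('Echo Pavla Klusáka', 'Radio Wave - '))               # unchanged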

youtube_dl/extractor/rtlnl.py

@@ -14,7 +14,7 @@ class RtlNlIE(InfoExtractor):
     _VALID_URL = r'''(?x)
         https?://(?:www\.)?
         (?:
-            rtlxl\.nl/\#!/[^/]+/|
+            rtlxl\.nl/[^\#]*\#!/[^/]+/|
             rtl\.nl/system/videoplayer/(?:[^/]+/)+(?:video_)?embed\.html\b.+?\buuid=
         )
         (?P<id>[0-9a-f-]+)'''
@@ -67,6 +67,9 @@ class RtlNlIE(InfoExtractor):
     }, {
         'url': 'http://www.rtl.nl/system/videoplayer/derden/embed.html#!/uuid=bb0353b0-d6a4-1dad-90e9-18fe75b8d1f0',
         'only_matching': True,
+    }, {
+        'url': 'http://rtlxl.nl/?_ga=1.204735956.572365465.1466978370#!/rtl-nieuws-132237/3c487912-023b-49ac-903e-2c5d79f8410f',
+        'only_matching': True,
     }]

     def _real_extract(self, url):
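
The relaxed pattern tolerates query junk (here a _ga tracking parameter) between rtlxl.nl/ and the #!/ fragment. Checking the newly added test URL against the updated pattern:

    import re

    VALID_URL = r'''(?x)
        https?://(?:www\.)?
        (?:
            rtlxl\.nl/[^\#]*\#!/[^/]+/|
            rtl\.nl/system/videoplayer/(?:[^/]+/)+(?:video_)?embed\.html\b.+?\buuid=
        )
        (?P<id>[0-9a-f-]+)'''

    url = ('http://rtlxl.nl/?_ga=1.204735956.572365465.1466978370'
           '#!/rtl-nieuws-132237/3c487912-023b-49ac-903e-2c5d79f8410f')
    print(re.match(VALID_URL, url).group('id'))  # 3c487912-023b-49ac-903e-2c5d79f8410f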

youtube_dl/extractor/rtve.py

@@ -113,6 +113,8 @@ class RTVEALaCartaIE(InfoExtractor):
         png = self._download_webpage(png_request, video_id, 'Downloading url information')
         video_url = _decrypt_url(png)
         if not video_url.endswith('.f4m'):
+            if '?' not in video_url:
+                video_url = video_url.replace('resources/', 'auth/resources/')
             video_url = video_url.replace('.net.rtve', '.multimedia.cdn.rtve')

         subtitles = None
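
The new branch prepends auth/ to the resource path before the CDN host rewrite, but only when the decrypted URL carries no query string. A sketch with an invented pre-rewrite URL:

    video_url = 'http://mvod1.net.rtve.es/resources/mp4/1/2/video.mp4'  # invented example

    if not video_url.endswith('.f4m'):
        if '?' not in video_url:
            video_url = video_url.replace('resources/', 'auth/resources/')
        video_url = video_url.replace('.net.rtve', '.multimedia.cdn.rtve')

    print(video_url)  # http://mvod1.multimedia.cdn.rtve.es/auth/resources/mp4/1/2/video.mp4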

youtube_dl/extractor/safari.py

@@ -75,7 +75,7 @@ class SafariBaseIE(InfoExtractor):
 class SafariIE(SafariBaseIE):
     IE_NAME = 'safari'
     IE_DESC = 'safaribooksonline.com online video'
-    _VALID_URL = r'https?://(?:www\.)?safaribooksonline\.com/library/view/[^/]+/(?P<course_id>[^/]+)/(?P<part>part\d+)\.html'
+    _VALID_URL = r'https?://(?:www\.)?safaribooksonline\.com/library/view/[^/]+/(?P<course_id>[^/]+)/(?P<part>[^/?#&]+)\.html'

     _TESTS = [{
         'url': 'https://www.safaribooksonline.com/library/view/hadoop-fundamentals-livelessons/9780133392838/part00.html',
@@ -92,6 +92,9 @@ class SafariIE(SafariBaseIE):
         # non-digits in course id
         'url': 'https://www.safaribooksonline.com/library/view/create-a-nodejs/100000006A0210/part00.html',
         'only_matching': True,
+    }, {
+        'url': 'https://www.safaribooksonline.com/library/view/learning-path-red/9780134664057/RHCE_Introduction.html',
+        'only_matching': True,
     }]

     def _real_extract(self, url):
@@ -132,12 +135,15 @@ class SafariIE(SafariBaseIE):
 class SafariApiIE(SafariBaseIE):
     IE_NAME = 'safari:api'
-    _VALID_URL = r'https?://(?:www\.)?safaribooksonline\.com/api/v1/book/(?P<course_id>[^/]+)/chapter(?:-content)?/(?P<part>part\d+)\.html'
+    _VALID_URL = r'https?://(?:www\.)?safaribooksonline\.com/api/v1/book/(?P<course_id>[^/]+)/chapter(?:-content)?/(?P<part>[^/?#&]+)\.html'

-    _TEST = {
+    _TESTS = [{
         'url': 'https://www.safaribooksonline.com/api/v1/book/9780133392838/chapter/part00.html',
         'only_matching': True,
-    }
+    }, {
+        'url': 'https://www.safaribooksonline.com/api/v1/book/9780134664057/chapter/RHCE_Introduction.html',
+        'only_matching': True,
+    }]

     def _real_extract(self, url):
         mobj = re.match(self._VALID_URL, url)

youtube_dl/extractor/sendtonews.py

@@ -4,33 +4,43 @@ from __future__ import unicode_literals
 import re

 from .jwplatform import JWPlatformBaseIE
-from ..compat import compat_parse_qs
 from ..utils import (
-    ExtractorError,
-    parse_duration,
+    float_or_none,
+    parse_iso8601,
+    update_url_query,
 )


 class SendtoNewsIE(JWPlatformBaseIE):
-    _VALID_URL = r'https?://embed\.sendtonews\.com/player/embed\.php\?(?P<query>[^#]+)'
+    _VALID_URL = r'https?://embed\.sendtonews\.com/player2/embedplayer\.php\?.*\bSC=(?P<id>[0-9A-Za-z-]+)'

     _TEST = {
         # From http://cleveland.cbslocal.com/2016/05/16/indians-score-season-high-15-runs-in-blowout-win-over-reds-rapid-reaction/
-        'url': 'http://embed.sendtonews.com/player/embed.php?SK=GxfCe0Zo7D&MK=175909&PK=5588&autoplay=on&sound=yes',
+        'url': 'http://embed.sendtonews.com/player2/embedplayer.php?SC=GxfCe0Zo7D-175909-5588&type=single&autoplay=on&sound=YES',
         'info_dict': {
             'id': 'GxfCe0Zo7D-175909-5588',
-            'ext': 'mp4',
-            'title': 'Recap: CLE 15, CIN 6',
-            'description': '5/16/16: Indians\' bats explode for 15 runs in a win',
-            'duration': 49,
         },
+        'playlist_count': 9,
+        # test the first video only to prevent lengthy tests
+        'playlist': [{
+            'info_dict': {
+                'id': '198180',
+                'ext': 'mp4',
+                'title': 'Recap: CLE 5, LAA 4',
+                'description': '8/14/16: Naquin, Almonte lead Indians in 5-4 win',
+                'duration': 57.343,
+                'thumbnail': 're:https?://.*\.jpg$',
+                'upload_date': '20160815',
+                'timestamp': 1471221961,
+            },
+        }],
         'params': {
             # m3u8 download
             'skip_download': True,
         },
     }

-    _URL_TEMPLATE = '//embed.sendtonews.com/player/embed.php?SK=%s&MK=%s&PK=%s'
+    _URL_TEMPLATE = '//embed.sendtonews.com/player2/embedplayer.php?SC=%s'

     @classmethod
     def _extract_url(cls, webpage):
@@ -39,48 +49,41 @@ class SendtoNewsIE(JWPlatformBaseIE):
             .*\bSC=(?P<SC>[0-9a-zA-Z-]+).*
             \1>''', webpage)
         if mobj:
-            sk, mk, pk = mobj.group('SC').split('-')
-            return cls._URL_TEMPLATE % (sk, mk, pk)
+            sc = mobj.group('SC')
+            return cls._URL_TEMPLATE % sc

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        params = compat_parse_qs(mobj.group('query'))
-
-        if 'SK' not in params or 'MK' not in params or 'PK' not in params:
-            raise ExtractorError('Invalid URL', expected=True)
-
-        video_id = '-'.join([params['SK'][0], params['MK'][0], params['PK'][0]])
-
-        webpage = self._download_webpage(url, video_id)
-
-        jwplayer_data_str = self._search_regex(
-            r'jwplayer\("[^"]+"\)\.setup\((.+?)\);', webpage, 'JWPlayer data')
-        js_vars = {
-            'w': 1024,
-            'h': 768,
-            'modeVar': 'html5',
-        }
-        for name, val in js_vars.items():
-            js_val = '%d' % val if isinstance(val, int) else '"%s"' % val
-            jwplayer_data_str = jwplayer_data_str.replace(':%s,' % name, ':%s,' % js_val)
-
-        info_dict = self._parse_jwplayer_data(
-            self._parse_json(jwplayer_data_str, video_id),
-            video_id, require_title=False, rtmp_params={'no_resume': True})
-
-        title = self._html_search_regex(
-            r'<div[^>]+class="embedTitle">([^<]+)</div>', webpage, 'title')
-        description = self._html_search_regex(
-            r'<div[^>]+class="embedSubTitle">([^<]+)</div>', webpage,
-            'description', fatal=False)
-        duration = parse_duration(self._html_search_regex(
-            r'<div[^>]+class="embedDetails">([0-9:]+)', webpage,
-            'duration', fatal=False))
-
-        info_dict.update({
-            'title': title,
-            'description': description,
-            'duration': duration,
-        })
-
-        return info_dict
+        playlist_id = self._match_id(url)
+
+        data_url = update_url_query(
+            url.replace('embedplayer.php', 'data_read.php'),
+            {'cmd': 'loadInitial'})
+        playlist_data = self._download_json(data_url, playlist_id)
+
+        entries = []
+        for video in playlist_data['playlistData'][0]:
+            info_dict = self._parse_jwplayer_data(
+                video['jwconfiguration'],
+                require_title=False, rtmp_params={'no_resume': True})
+
+            thumbnails = []
+            if video.get('thumbnailUrl'):
+                thumbnails.append({
+                    'id': 'normal',
+                    'url': video['thumbnailUrl'],
+                })
+            if video.get('smThumbnailUrl'):
+                thumbnails.append({
+                    'id': 'small',
+                    'url': video['smThumbnailUrl'],
+                })
+            info_dict.update({
+                'title': video['S_headLine'],
+                'description': video.get('S_fullStory'),
+                'thumbnails': thumbnails,
+                'duration': float_or_none(video.get('SM_length')),
+                'timestamp': parse_iso8601(video.get('S_sysDate'), delimiter=' '),
+            })
+
+            entries.append(info_dict)
+
+        return self.playlist_result(entries, playlist_id)
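
Rather than scraping the player page, the rewrite asks the player's data endpoint for the whole playlist. The URL juggling is plain string replacement plus update_url_query:

    from youtube_dl.utils import update_url_query  # assumes youtube-dl is importable

    url = 'http://embed.sendtonews.com/player2/embedplayer.php?SC=GxfCe0Zo7D-175909-5588&type=single'

    data_url = update_url_query(
        url.replace('embedplayer.php', 'data_read.php'),
        {'cmd': 'loadInitial'})
    print(data_url)
    # http://embed.sendtonews.com/player2/data_read.php?SC=GxfCe0Zo7D-175909-5588&type=single&cmd=loadInitial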

youtube_dl/extractor/shared.py

@@ -6,7 +6,6 @@ from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
     int_or_none,
-    sanitized_Request,
     urlencode_postdata,
 )

@@ -37,28 +36,33 @@ class SharedIE(InfoExtractor):
     def _real_extract(self, url):
         video_id = self._match_id(url)

-        webpage = self._download_webpage(url, video_id)
+        webpage, urlh = self._download_webpage_handle(url, video_id)

         if '>File does not exist<' in webpage:
             raise ExtractorError(
                 'Video %s does not exist' % video_id, expected=True)

         download_form = self._hidden_inputs(webpage)
-        request = sanitized_Request(
-            url, urlencode_postdata(download_form))
-        request.add_header('Content-Type', 'application/x-www-form-urlencoded')

         video_page = self._download_webpage(
-            request, video_id, 'Downloading video page')
+            urlh.geturl(), video_id, 'Downloading video page',
+            data=urlencode_postdata(download_form),
+            headers={
+                'Content-Type': 'application/x-www-form-urlencoded',
+                'Referer': urlh.geturl(),
+            })

         video_url = self._html_search_regex(
-            r'data-url="([^"]+)"', video_page, 'video URL')
+            r'data-url=(["\'])(?P<url>(?:(?!\1).)+)\1',
+            video_page, 'video URL', group='url')
         title = base64.b64decode(self._html_search_meta(
             'full:title', webpage, 'title').encode('utf-8')).decode('utf-8')
         filesize = int_or_none(self._html_search_meta(
             'full:size', webpage, 'file size', fatal=False))
         thumbnail = self._html_search_regex(
-            r'data-poster="([^"]+)"', video_page, 'thumbnail', default=None)
+            r'data-poster=(["\'])(?P<url>(?:(?!\1).)+)\1',
+            video_page, 'thumbnail', default=None, group='url')

         return {
             'id': video_id,
View File

@@ -13,20 +13,21 @@ from ..utils import (
sanitized_Request, sanitized_Request,
unified_strdate, unified_strdate,
urlencode_postdata, urlencode_postdata,
xpath_text,
) )
class SmotriIE(InfoExtractor): class SmotriIE(InfoExtractor):
IE_DESC = 'Smotri.com' IE_DESC = 'Smotri.com'
IE_NAME = 'smotri' IE_NAME = 'smotri'
_VALID_URL = r'^https?://(?:www\.)?(?:smotri\.com/video/view/\?id=|pics\.smotri\.com/(?:player|scrubber_custom8)\.swf\?file=)(?P<id>v(?P<realvideoid>[0-9]+)[a-z0-9]{4})' _VALID_URL = r'https?://(?:www\.)?(?:smotri\.com/video/view/\?id=|pics\.smotri\.com/(?:player|scrubber_custom8)\.swf\?file=)(?P<id>v(?P<realvideoid>[0-9]+)[a-z0-9]{4})'
_NETRC_MACHINE = 'smotri' _NETRC_MACHINE = 'smotri'
_TESTS = [ _TESTS = [
# real video id 2610366 # real video id 2610366
{ {
'url': 'http://smotri.com/video/view/?id=v261036632ab', 'url': 'http://smotri.com/video/view/?id=v261036632ab',
'md5': '2a7b08249e6f5636557579c368040eb9', 'md5': '02c0dfab2102984e9c5bb585cc7cc321',
'info_dict': { 'info_dict': {
'id': 'v261036632ab', 'id': 'v261036632ab',
'ext': 'mp4', 'ext': 'mp4',
@@ -174,11 +175,11 @@ class SmotriIE(InfoExtractor):
if video_password: if video_password:
video_form['pass'] = hashlib.md5(video_password.encode('utf-8')).hexdigest() video_form['pass'] = hashlib.md5(video_password.encode('utf-8')).hexdigest()
request = sanitized_Request( video = self._download_json(
'http://smotri.com/video/view/url/bot/', urlencode_postdata(video_form)) 'http://smotri.com/video/view/url/bot/',
request.add_header('Content-Type', 'application/x-www-form-urlencoded') video_id, 'Downloading video JSON',
data=urlencode_postdata(video_form),
video = self._download_json(request, video_id, 'Downloading video JSON') headers={'Content-Type': 'application/x-www-form-urlencoded'})
video_url = video.get('_vidURL') or video.get('_vidURL_mp4') video_url = video.get('_vidURL') or video.get('_vidURL_mp4')
@@ -196,11 +197,11 @@ class SmotriIE(InfoExtractor):
raise ExtractorError(msg, expected=True) raise ExtractorError(msg, expected=True)
title = video['title'] title = video['title']
thumbnail = video['_imgURL'] thumbnail = video.get('_imgURL')
upload_date = unified_strdate(video['added']) upload_date = unified_strdate(video.get('added'))
uploader = video['userNick'] uploader = video.get('userNick')
uploader_id = video['userLogin'] uploader_id = video.get('userLogin')
duration = int_or_none(video['duration']) duration = int_or_none(video.get('duration'))
# Video JSON does not provide enough meta data # Video JSON does not provide enough meta data
# We will extract some from the video web page instead # We will extract some from the video web page instead
@@ -209,7 +210,7 @@ class SmotriIE(InfoExtractor):
# Warning if video is unavailable # Warning if video is unavailable
warning = self._html_search_regex( warning = self._html_search_regex(
r'<div class="videoUnModer">(.*?)</div>', webpage, r'<div[^>]+class="videoUnModer"[^>]*>(.+?)</div>', webpage,
'warning message', default=None) 'warning message', default=None)
if warning is not None: if warning is not None:
self._downloader.report_warning( self._downloader.report_warning(
@@ -217,20 +218,22 @@ class SmotriIE(InfoExtractor):
(video_id, warning)) (video_id, warning))
# Adult content # Adult content
if re.search('EroConfirmText">', webpage) is not None: if 'EroConfirmText">' in webpage:
self.report_age_confirmation() self.report_age_confirmation()
confirm_string = self._html_search_regex( confirm_string = self._html_search_regex(
r'<a href="/video/view/\?id=%s&confirm=([^"]+)" title="[^"]+">' % video_id, r'<a[^>]+href="/video/view/\?id=%s&confirm=([^"]+)"' % video_id,
webpage, 'confirm string') webpage, 'confirm string')
confirm_url = webpage_url + '&confirm=%s' % confirm_string confirm_url = webpage_url + '&confirm=%s' % confirm_string
webpage = self._download_webpage(confirm_url, video_id, 'Downloading video page (age confirmed)') webpage = self._download_webpage(
confirm_url, video_id,
'Downloading video page (age confirmed)')
adult_content = True adult_content = True
else: else:
adult_content = False adult_content = False
view_count = self._html_search_regex( view_count = self._html_search_regex(
'Общее количество просмотров.*?<span class="Number">(\\d+)</span>', r'(?s)Общее количество просмотров.*?<span class="Number">(\d+)</span>',
webpage, 'view count', fatal=False, flags=re.MULTILINE | re.DOTALL) webpage, 'view count', fatal=False)
return { return {
'id': video_id, 'id': video_id,
@@ -249,37 +252,33 @@ class SmotriIE(InfoExtractor):
class SmotriCommunityIE(InfoExtractor): class SmotriCommunityIE(InfoExtractor):
IE_DESC = 'Smotri.com community videos' IE_DESC = 'Smotri.com community videos'
IE_NAME = 'smotri:community' IE_NAME = 'smotri:community'
_VALID_URL = r'^https?://(?:www\.)?smotri\.com/community/video/(?P<communityid>[0-9A-Za-z_\'-]+)' _VALID_URL = r'https?://(?:www\.)?smotri\.com/community/video/(?P<id>[0-9A-Za-z_\'-]+)'
_TEST = { _TEST = {
'url': 'http://smotri.com/community/video/kommuna', 'url': 'http://smotri.com/community/video/kommuna',
'info_dict': { 'info_dict': {
'id': 'kommuna', 'id': 'kommuna',
'title': 'КПРФ',
}, },
'playlist_mincount': 4, 'playlist_mincount': 4,
} }
def _real_extract(self, url): def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url) community_id = self._match_id(url)
community_id = mobj.group('communityid')
url = 'http://smotri.com/export/rss/video/by/community/-/%s/video.xml' % community_id rss = self._download_xml(
rss = self._download_xml(url, community_id, 'Downloading community RSS') 'http://smotri.com/export/rss/video/by/community/-/%s/video.xml' % community_id,
community_id, 'Downloading community RSS')
entries = [self.url_result(video_url.text, 'Smotri') entries = [
for video_url in rss.findall('./channel/item/link')] self.url_result(video_url.text, SmotriIE.ie_key())
for video_url in rss.findall('./channel/item/link')]
description_text = rss.find('./channel/description').text return self.playlist_result(entries, community_id)
community_title = self._html_search_regex(
'^Видео сообщества "([^"]+)"$', description_text, 'community title')
return self.playlist_result(entries, community_id, community_title)
class SmotriUserIE(InfoExtractor): class SmotriUserIE(InfoExtractor):
IE_DESC = 'Smotri.com user videos' IE_DESC = 'Smotri.com user videos'
IE_NAME = 'smotri:user' IE_NAME = 'smotri:user'
_VALID_URL = r'^https?://(?:www\.)?smotri\.com/user/(?P<userid>[0-9A-Za-z_\'-]+)' _VALID_URL = r'https?://(?:www\.)?smotri\.com/user/(?P<id>[0-9A-Za-z_\'-]+)'
_TESTS = [{ _TESTS = [{
'url': 'http://smotri.com/user/inspector', 'url': 'http://smotri.com/user/inspector',
'info_dict': { 'info_dict': {
@@ -290,19 +289,19 @@ class SmotriUserIE(InfoExtractor):
}] }]
def _real_extract(self, url): def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url) user_id = self._match_id(url)
user_id = mobj.group('userid')
url = 'http://smotri.com/export/rss/user/video/-/%s/video.xml' % user_id rss = self._download_xml(
rss = self._download_xml(url, user_id, 'Downloading user RSS') 'http://smotri.com/export/rss/user/video/-/%s/video.xml' % user_id,
user_id, 'Downloading user RSS')
entries = [self.url_result(video_url.text, 'Smotri') entries = [self.url_result(video_url.text, 'Smotri')
for video_url in rss.findall('./channel/item/link')] for video_url in rss.findall('./channel/item/link')]
description_text = rss.find('./channel/description').text description_text = xpath_text(rss, './channel/description') or ''
user_nickname = self._html_search_regex( user_nickname = self._search_regex(
'^Видео режиссера (.*)$', description_text, '^Видео режиссера (.+)$', description_text,
'user nickname') 'user nickname', fatal=False)
return self.playlist_result(entries, user_id, user_nickname) return self.playlist_result(entries, user_id, user_nickname)
@@ -310,11 +309,11 @@ class SmotriUserIE(InfoExtractor):
class SmotriBroadcastIE(InfoExtractor): class SmotriBroadcastIE(InfoExtractor):
IE_DESC = 'Smotri.com broadcasts' IE_DESC = 'Smotri.com broadcasts'
IE_NAME = 'smotri:broadcast' IE_NAME = 'smotri:broadcast'
_VALID_URL = r'^https?://(?:www\.)?(?P<url>smotri\.com/live/(?P<broadcastid>[^/]+))/?.*' _VALID_URL = r'https?://(?:www\.)?(?P<url>smotri\.com/live/(?P<id>[^/]+))/?.*'
def _real_extract(self, url): def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url) mobj = re.match(self._VALID_URL, url)
broadcast_id = mobj.group('broadcastid') broadcast_id = mobj.group('id')
broadcast_url = 'http://' + mobj.group('url') broadcast_url = 'http://' + mobj.group('url')
broadcast_page = self._download_webpage(broadcast_url, broadcast_id, 'Downloading broadcast page') broadcast_page = self._download_webpage(broadcast_url, broadcast_id, 'Downloading broadcast page')
@@ -328,7 +327,8 @@ class SmotriBroadcastIE(InfoExtractor):
(username, password) = self._get_login_info() (username, password) = self._get_login_info()
if username is None: if username is None:
self.raise_login_required('Erotic broadcasts allowed only for registered users') self.raise_login_required(
'Erotic broadcasts allowed only for registered users')
login_form = { login_form = {
'login-hint53': '1', 'login-hint53': '1',
@@ -343,8 +343,9 @@ class SmotriBroadcastIE(InfoExtractor):
broadcast_page = self._download_webpage( broadcast_page = self._download_webpage(
request, broadcast_id, 'Logging in and confirming age') request, broadcast_id, 'Logging in and confirming age')
if re.search('>Неверный логин или пароль<', broadcast_page) is not None: if '>Неверный логин или пароль<' in broadcast_page:
raise ExtractorError('Unable to log in: bad username or password', expected=True) raise ExtractorError(
'Unable to log in: bad username or password', expected=True)
adult_content = True adult_content = True
else: else:
@@ -383,11 +384,11 @@ class SmotriBroadcastIE(InfoExtractor):
broadcast_playpath = broadcast_json['_streamName'] broadcast_playpath = broadcast_json['_streamName']
broadcast_app = '%s/%s' % (mobj.group('app'), broadcast_json['_vidURL']) broadcast_app = '%s/%s' % (mobj.group('app'), broadcast_json['_vidURL'])
broadcast_thumbnail = broadcast_json['_imgURL'] broadcast_thumbnail = broadcast_json.get('_imgURL')
broadcast_title = self._live_title(broadcast_json['title']) broadcast_title = self._live_title(broadcast_json['title'])
broadcast_description = broadcast_json['description'] broadcast_description = broadcast_json.get('description')
broadcaster_nick = broadcast_json['nick'] broadcaster_nick = broadcast_json.get('nick')
broadcaster_login = broadcast_json['login'] broadcaster_login = broadcast_json.get('login')
rtmp_conn = 'S:%s' % uuid.uuid4().hex rtmp_conn = 'S:%s' % uuid.uuid4().hex
except KeyError: except KeyError:
if protected_broadcast: if protected_broadcast:
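
The view-count cleanup swaps flags=re.MULTILINE | re.DOTALL for an inline (?s) modifier, which keeps working even though _html_search_regex passes no flags argument. The same idiom in plain re, over an invented page fragment where the label and the counter sit on different lines:

    import re

    page = 'Общее количество просмотров</td>\n<td><span class="Number">12345</span>'

    print(re.search(
        r'(?s)Общее количество просмотров.*?<span class="Number">(\d+)</span>',
        page).group(1))  # 12345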

youtube_dl/extractor/sohu.py

@@ -14,10 +14,10 @@ from ..utils import ExtractorError

 class SohuIE(InfoExtractor):
     _VALID_URL = r'https?://(?P<mytv>my\.)?tv\.sohu\.com/.+?/(?(mytv)|n)(?P<id>\d+)\.shtml.*?'

+    # Sohu videos give different MD5 sums on Travis CI and my machine
     _TESTS = [{
         'note': 'This video is available only in Mainland China',
         'url': 'http://tv.sohu.com/20130724/n382479172.shtml#super',
-        'md5': '29175c8cadd8b5cc4055001e85d6b372',
         'info_dict': {
             'id': '382479172',
             'ext': 'mp4',
@@ -26,7 +26,6 @@ class SohuIE(InfoExtractor):
         'skip': 'Only available in China',
     }, {
         'url': 'http://tv.sohu.com/20150305/n409385080.shtml',
-        'md5': '699060e75cf58858dd47fb9c03c42cfb',
         'info_dict': {
             'id': '409385080',
             'ext': 'mp4',
@@ -34,7 +33,6 @@ class SohuIE(InfoExtractor):
         }
     }, {
         'url': 'http://my.tv.sohu.com/us/232799889/78693464.shtml',
-        'md5': '9bf34be48f2f4dadcb226c74127e203c',
         'info_dict': {
             'id': '78693464',
             'ext': 'mp4',
@@ -48,7 +46,6 @@ class SohuIE(InfoExtractor):
             'title': '【神探苍实战秘籍】第13期 战争之影 赫卡里姆',
         },
         'playlist': [{
-            'md5': 'bdbfb8f39924725e6589c146bc1883ad',
             'info_dict': {
                 'id': '78910339_part1',
                 'ext': 'mp4',
@@ -56,7 +53,6 @@ class SohuIE(InfoExtractor):
                 'title': '【神探苍实战秘籍】第13期 战争之影 赫卡里姆',
             }
         }, {
-            'md5': '3e1f46aaeb95354fd10e7fca9fc1804e',
             'info_dict': {
                 'id': '78910339_part2',
                 'ext': 'mp4',
@@ -64,7 +60,6 @@ class SohuIE(InfoExtractor):
                 'title': '【神探苍实战秘籍】第13期 战争之影 赫卡里姆',
             }
         }, {
-            'md5': '8407e634175fdac706766481b9443450',
             'info_dict': {
                 'id': '78910339_part3',
                 'ext': 'mp4',

youtube_dl/extractor/sonyliv.py (new file)

@@ -0,0 +1,34 @@
# coding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
class SonyLIVIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?sonyliv\.com/details/[^/]+/(?P<id>\d+)'
_TESTS = [{
'url': "http://www.sonyliv.com/details/episodes/5024612095001/Ep.-1---Achaari-Cheese-Toast---Bachelor's-Delight",
'info_dict': {
'title': "Ep. 1 - Achaari Cheese Toast - Bachelor's Delight",
'id': '5024612095001',
'ext': 'mp4',
'upload_date': '20160707',
'description': 'md5:7f28509a148d5be9d0782b4d5106410d',
'uploader_id': '4338955589001',
'timestamp': 1467870968,
},
'params': {
'skip_download': True,
},
'add_ie': ['BrightcoveNew'],
}, {
'url': 'http://www.sonyliv.com/details/full%20movie/4951168986001/Sei-Raat-(Bangla)',
'only_matching': True,
}]
BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/4338955589001/default_default/index.html?videoId=%s'
def _real_extract(self, url):
brightcove_id = self._match_id(url)
return self.url_result(
self.BRIGHTCOVE_URL_TEMPLATE % brightcove_id, 'BrightcoveNew', brightcove_id)
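
SonyLIV pages wrap Brightcove, so the extractor only has to expand the numeric id into the account's default player URL and delegate via url_result:

    BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/4338955589001/default_default/index.html?videoId=%s'

    print(BRIGHTCOVE_URL_TEMPLATE % '5024612095001')
    # http://players.brightcove.net/4338955589001/default_default/index.html?videoId=5024612095001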

youtube_dl/extractor/soundcloud.py

@@ -119,6 +119,12 @@ class SoundcloudIE(InfoExtractor):
     _CLIENT_ID = '02gUJC0hH2ct1EGOcYXQIzRFU91c72Ea'
     _IPHONE_CLIENT_ID = '376f225bf427445fc4bfb6b99b72e0bf'

+    @staticmethod
+    def _extract_urls(webpage):
+        return [m.group('url') for m in re.finditer(
+            r'<iframe[^>]+src=(["\'])(?P<url>(?:https?://)?(?:w\.)?soundcloud\.com/player.+?)\1',
+            webpage)]
+
     def report_resolve(self, video_id):
         """Report information extraction."""
         self.to_screen('%s: Resolving id' % video_id)
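
_extract_urls lets the generic extractor harvest every embedded SoundCloud player iframe from a page. Running the same regex over a toy page fragment:

    import re

    PLAYER_RE = (r'<iframe[^>]+src=(["\'])'
                 r'(?P<url>(?:https?://)?(?:w\.)?soundcloud\.com/player.+?)\1')

    webpage = ('<iframe width="100%" height="166" scrolling="no" '
               'src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/1234"></iframe>')

    print([m.group('url') for m in re.finditer(PLAYER_RE, webpage)])
    # ['https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/1234']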

youtube_dl/extractor/southpark.py

@@ -17,6 +17,8 @@ class SouthParkIE(MTVServicesInfoExtractor):
             'ext': 'mp4',
             'title': 'South Park|Bat Daded',
             'description': 'Randy disqualifies South Park by getting into a fight with Bat Dad.',
+            'timestamp': 1112760000,
+            'upload_date': '20050406',
         },
     }]
@@ -28,6 +30,10 @@ class SouthParkEsIE(SouthParkIE):
     _TESTS = [{
         'url': 'http://southpark.cc.com/episodios-en-espanol/s01e01-cartman-consigue-una-sonda-anal#source=351c1323-0b96-402d-a8b9-40d01b2e9bde&position=1&sort=!airdate',
+        'info_dict': {
+            'title': 'Cartman Consigue Una Sonda Anal',
+            'description': 'Cartman Consigue Una Sonda Anal',
+        },
         'playlist_count': 4,
     }]
@@ -42,17 +48,27 @@ class SouthParkDeIE(SouthParkIE):
         'info_dict': {
             'id': '85487c96-b3b9-4e39-9127-ad88583d9bf2',
             'ext': 'mp4',
-            'title': 'The Government Won\'t Respect My Privacy',
+            'title': 'South Park|The Government Won\'t Respect My Privacy',
             'description': 'Cartman explains the benefits of "Shitter" to Stan, Kyle and Craig.',
+            'timestamp': 1380160800,
+            'upload_date': '20130926',
         },
     }, {
         # non-ASCII characters in initial URL
         'url': 'http://www.southpark.de/alle-episoden/s18e09-hashtag-aufwärmen',
-        'playlist_count': 4,
+        'info_dict': {
+            'title': 'Hashtag „Aufwärmen“',
+            'description': 'Kyle will mit seinem kleinen Bruder Ike Videospiele spielen. Als der nicht mehr mit ihm spielen will, hat Kyle Angst, dass er die Kids von heute nicht mehr versteht.',
+        },
+        'playlist_count': 3,
     }, {
         # non-ASCII characters in redirect URL
         'url': 'http://www.southpark.de/alle-episoden/s18e09',
-        'playlist_count': 4,
+        'info_dict': {
+            'title': 'Hashtag „Aufwärmen“',
+            'description': 'Kyle will mit seinem kleinen Bruder Ike Videospiele spielen. Als der nicht mehr mit ihm spielen will, hat Kyle Angst, dass er die Kids von heute nicht mehr versteht.',
+        },
+        'playlist_count': 3,
     }]
@@ -63,7 +79,11 @@ class SouthParkNlIE(SouthParkIE):
     _TESTS = [{
         'url': 'http://www.southpark.nl/full-episodes/s18e06-freemium-isnt-free',
-        'playlist_count': 4,
+        'info_dict': {
+            'title': 'Freemium Isn\'t Free',
+            'description': 'Stan is addicted to the new Terrance and Phillip mobile game.',
+        },
+        'playlist_mincount': 3,
     }]
@@ -74,5 +94,9 @@ class SouthParkDkIE(SouthParkIE):
     _TESTS = [{
         'url': 'http://www.southparkstudios.dk/full-episodes/s18e07-grounded-vindaloop',
-        'playlist_count': 4,
+        'info_dict': {
+            'title': 'Grounded Vindaloop',
+            'description': 'Butters is convinced he\'s living in a virtual reality.',
+        },
+        'playlist_mincount': 3,
     }]

Some files were not shown because too many files have changed in this diff.