Compare commits


23 Commits

Author             SHA1        Message (Date)

Sergey M․          a8e687a4da  release 2017.03.10 (2017-03-10 23:26:28 +07:00)
Sergey M․          f9e5c92c94  [ChangeLog] Actualize (2017-03-10 23:23:24 +07:00)
Sergey M․          c2ee861c6d  [extractor/generic] Make title optional for jwplayer embeds (closes #12410) (2017-03-10 23:16:53 +07:00)
Sergey M․          bd34c32bd7  [wdr] Actualize comment (2017-03-10 23:07:36 +07:00)
runningbits        f802c48660  [wdr:maus] Fix extraction and update tests (2017-03-10 23:59:32 +08:00)
Sergey M․          76bee08fe7  [prosiebensat1] Improve title extraction and add test (2017-03-09 23:42:07 +07:00)
Thomas Christlieb  2913821723  [prosiebensat1] Improve title extraction (closes #12318) (2017-03-10 00:18:37 +08:00)
Sergey M․          0e7f9a9b48  [dplayit] Relax playback info URL extraction (2017-03-08 21:30:30 +07:00)
Sergey M․          0cf2352e85  [dplayit] Separate and rewrite extractor and bypass geo restriction (closes #12393) (2017-03-08 21:20:01 +07:00)
Yen Chi Hsuan      0f6b87d067  [miomio] Fix extraction (2017-03-08 19:46:58 +08:00)
    Closes #12291
    Closes #12388
    Closes #12402
Sergey M․          d7344d33b1  [telequebec] Fix description extraction and update test (closes #12399) (2017-03-08 18:25:59 +07:00)
denneboomyo        b08cc749d6  [openload] Fix extraction (2017-03-08 06:01:27 +08:00)
Sergey M․          b68a812ea8  [extractor/generic] Add test for brigthcove UUID-like videoPlayer (2017-03-07 23:00:21 +07:00)
Sergey M․          2e76bdc850  [brightcove:legacy] Relax videoPlayer validation check (closes #12381) (2017-03-07 22:59:33 +07:00)
Yen Chi Hsuan      fe646a2f10  [twitch] PEP8 (2017-03-07 15:34:06 +08:00)
Sergey M․          9df53ea36e  Credit @puxlit for twitch 2fa (#11974) (2017-03-07 04:05:47 +07:00)
Sergey M․          d7d7f84c95  Credit @benages for redbull.tv (#11948) (2017-03-07 04:05:47 +07:00)
Sergey M․          dccd0ab35d  release 2017.03.07 (2017-03-07 03:59:22 +07:00)
Sergey M․          80146dcc6c  [ChangeLog] Actualize (2017-03-07 03:57:54 +07:00)
Sergey M․          e30ccf7047  [soundcloud] Update client id (closes #12376) (2017-03-06 23:05:38 +07:00)
Yen Chi Hsuan      54a3a8827b  [__init__] Metadata should be added after conversion (2017-03-06 18:09:12 +08:00)
    Fixes #5594
Yen Chi Hsuan      92cb5763f4  [ChangeLog] Update after #12357 (2017-03-06 18:04:19 +08:00)
denneboomyo        da92da4b88  Openload fix extraction (#12357) (2017-03-06 18:00:17 +08:00)
    * Fix extraction
17 changed files with 252 additions and 66 deletions

.github/ISSUE_TEMPLATE.md

@@ -6,8 +6,8 @@
 ---
-### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.03.06*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
-- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2017.03.06**
+### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.03.10*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
+- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2017.03.10**
 ### Before submitting an *issue* make sure you have:
 - [ ] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
@@ -35,7 +35,7 @@ $ youtube-dl -v <your command line>
 [debug] User config: []
 [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dl version 2017.03.06
+[debug] youtube-dl version 2017.03.10
 [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
 [debug] Proxy map: {}

AUTHORS

@@ -207,3 +207,5 @@ Marek Rusinowski
 Tobias Gruetzmacher
 Olivier Bilodeau
 Lars Vierbergen
+Juanjo Benages
+Xiao Di Guan

ChangeLog

@@ -1,3 +1,26 @@
+version 2017.03.10
+
+Extractors
+* [generic] Make title optional for jwplayer embeds (#12410)
+* [wdr:maus] Fix extraction (#12373)
+* [prosiebensat1] Improve title extraction (#12318, #12327)
+* [dplayit] Separate and rewrite extractor and bypass geo restriction (#12393)
+* [miomio] Fix extraction (#12291, #12388, #12402)
+* [telequebec] Fix description extraction (#12399)
+* [openload] Fix extraction (#12357)
+* [brightcove:legacy] Relax videoPlayer validation check (#12381)
+
+
+version 2017.03.07
+
+Core
+* Metadata are now added after conversion (#5594)
+
+Extractors
+* [soundcloud] Update client id (#12376)
+* [openload] Fix extraction (#10408, #12357)
+
+
 version 2017.03.06
 
 Core

docs/supportedsites.md

@@ -212,6 +212,7 @@
 - **Dotsub**
 - **DouyuTV**: 斗鱼
 - **DPlay**
+- **DPlayIt**
 - **dramafever**
 - **dramafever:series**
 - **DRBonanza**

youtube_dl/__init__.py

@@ -242,14 +242,11 @@ def _real_main(argv=None):
     # PostProcessors
     postprocessors = []
-    # Add the metadata pp first, the other pps will copy it
     if opts.metafromtitle:
         postprocessors.append({
             'key': 'MetadataFromTitle',
             'titleformat': opts.metafromtitle
         })
-    if opts.addmetadata:
-        postprocessors.append({'key': 'FFmpegMetadata'})
     if opts.extractaudio:
         postprocessors.append({
             'key': 'FFmpegExtractAudio',
@@ -279,6 +276,11 @@
             })
         if not already_have_thumbnail:
             opts.writethumbnail = True
+    # FFmpegMetadataPP should be run after FFmpegVideoConvertorPP and
+    # FFmpegExtractAudioPP as containers before conversion may not support
+    # metadata (3gp, webm, etc.)
+    if opts.addmetadata:
+        postprocessors.append({'key': 'FFmpegMetadata'})
     # XAttrMetadataPP should be run after post-processors that may change file
     # contents
     if opts.xattrs:
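To isolate the point of the reordering, here is a condensed sketch of how the post-processor list is now assembled; it is not the actual _real_main code, and the other post-processor options are reduced to bare keys for brevity:

def build_postprocessors(opts):
    # Post-processors run in list order, each on the output of the previous one
    postprocessors = []
    if opts.metafromtitle:
        postprocessors.append({
            'key': 'MetadataFromTitle',
            'titleformat': opts.metafromtitle,
        })
    if opts.extractaudio:
        postprocessors.append({'key': 'FFmpegExtractAudio'})
    if opts.recodevideo:
        postprocessors.append({'key': 'FFmpegVideoConvertor'})
    # FFmpegMetadata is appended only after the conversion post-processors,
    # so the metadata ends up in the converted container (the original
    # 3gp/webm container may not support it)
    if opts.addmetadata:
        postprocessors.append({'key': 'FFmpegMetadata'})
    # XAttrMetadata still follows anything that may rewrite the file contents
    if opts.xattrs:
        postprocessors.append({'key': 'XAttrMetadata'})
    return postprocessors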

youtube_dl/extractor/brightcove.py

@@ -193,7 +193,13 @@ class BrightcoveLegacyIE(InfoExtractor):
         if videoPlayer is not None:
             if isinstance(videoPlayer, list):
                 videoPlayer = videoPlayer[0]
-            if not (videoPlayer.isdigit() or videoPlayer.startswith('ref:')):
+            videoPlayer = videoPlayer.strip()
+            # UUID is also possible for videoPlayer (e.g.
+            # http://www.popcornflix.com/hoodies-vs-hooligans/7f2d2b87-bbf2-4623-acfb-ea942b4f01dd
+            # or http://www8.hp.com/cn/zh/home.html)
+            if not (re.match(
+                    r'^(?:\d+|[\da-fA-F]{8}-?[\da-fA-F]{4}-?[\da-fA-F]{4}-?[\da-fA-F]{4}-?[\da-fA-F]{12})$',
+                    videoPlayer) or videoPlayer.startswith('ref:')):
                 return None
             params['@videoPlayer'] = videoPlayer
         linkBase = find_param('linkBaseURL')
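As a reading aid for the relaxed check above, a small self-contained sketch of what now passes validation; the numeric and ref: sample values are made up for illustration, and the UUID is the one quoted in the new comment.

import re

VIDEO_PLAYER_RE = (
    r'^(?:\d+|[\da-fA-F]{8}-?[\da-fA-F]{4}-?[\da-fA-F]{4}-?'
    r'[\da-fA-F]{4}-?[\da-fA-F]{12})$')


def is_valid_video_player(value):
    # Plain numeric ids, ref: ids and (newly) UUID-style ids are accepted
    value = value.strip()
    return bool(re.match(VIDEO_PLAYER_RE, value)) or value.startswith('ref:')


assert is_valid_video_player('271533965')                             # numeric id (made up)
assert is_valid_video_player('ref:some_ref_id')                       # ref: id (made up)
assert is_valid_video_player('7f2d2b87-bbf2-4623-acfb-ea942b4f01dd')  # UUID from the comment above
assert not is_valid_video_player('not-a-player-id')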

youtube_dl/extractor/dplay.py

@@ -6,37 +6,24 @@ import re
 import time
 
 from .common import InfoExtractor
-from ..compat import compat_urlparse
+from ..compat import (
+    compat_urlparse,
+    compat_HTTPError,
+)
 from ..utils import (
     USER_AGENTS,
+    ExtractorError,
     int_or_none,
+    unified_strdate,
+    remove_end,
     update_url_query,
 )
 
 
 class DPlayIE(InfoExtractor):
-    _VALID_URL = r'https?://(?P<domain>it\.dplay\.com|www\.dplay\.(?:dk|se|no))/[^/]+/(?P<id>[^/?#]+)'
+    _VALID_URL = r'https?://(?P<domain>www\.dplay\.(?:dk|se|no))/[^/]+/(?P<id>[^/?#]+)'
 
     _TESTS = [{
-        # geo restricted, via direct unsigned hls URL
-        'url': 'http://it.dplay.com/take-me-out/stagione-1-episodio-25/',
-        'info_dict': {
-            'id': '1255600',
-            'display_id': 'stagione-1-episodio-25',
-            'ext': 'mp4',
-            'title': 'Episodio 25',
-            'description': 'md5:cae5f40ad988811b197d2d27a53227eb',
-            'duration': 2761,
-            'timestamp': 1454701800,
-            'upload_date': '20160205',
-            'creator': 'RTIT',
-            'series': 'Take me out',
-            'season_number': 1,
-            'episode_number': 25,
-            'age_limit': 0,
-        },
-        'expected_warnings': ['Unable to download f4m manifest'],
-    }, {
         # non geo restricted, via secure api, unsigned download hls URL
         'url': 'http://www.dplay.se/nugammalt-77-handelser-som-format-sverige/season-1-svensken-lar-sig-njuta-av-livet/',
         'info_dict': {
@@ -168,3 +155,90 @@ class DPlayIE(InfoExtractor):
             'formats': formats,
             'subtitles': subtitles,
         }
+
+
+class DPlayItIE(InfoExtractor):
+    _VALID_URL = r'https?://it\.dplay\.com/[^/]+/[^/]+/(?P<id>[^/?#]+)'
+    _GEO_COUNTRIES = ['IT']
+    _TEST = {
+        'url': 'http://it.dplay.com/nove/biografie-imbarazzanti/luigi-di-maio-la-psicosi-di-stanislawskij/',
+        'md5': '2b808ffb00fc47b884a172ca5d13053c',
+        'info_dict': {
+            'id': '6918',
+            'display_id': 'luigi-di-maio-la-psicosi-di-stanislawskij',
+            'ext': 'mp4',
+            'title': 'Biografie imbarazzanti: Luigi Di Maio: la psicosi di Stanislawskij',
+            'description': 'md5:3c7a4303aef85868f867a26f5cc14813',
+            'thumbnail': r're:^https?://.*\.jpe?g',
+            'upload_date': '20160524',
+            'series': 'Biografie imbarazzanti',
+            'season_number': 1,
+            'episode': 'Luigi Di Maio: la psicosi di Stanislawskij',
+            'episode_number': 1,
+        },
+    }
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+
+        webpage = self._download_webpage(url, display_id)
+
+        info_url = self._search_regex(
+            r'url\s*:\s*["\']((?:https?:)?//[^/]+/playback/videoPlaybackInfo/\d+)',
+            webpage, 'video id')
+
+        title = remove_end(self._og_search_title(webpage), ' | Dplay')
+
+        try:
+            info = self._download_json(
+                info_url, display_id, headers={
+                    'Authorization': 'Bearer %s' % self._get_cookies(url).get(
+                        'dplayit_token').value,
+                    'Referer': url,
+                })
+        except ExtractorError as e:
+            if isinstance(e.cause, compat_HTTPError) and e.cause.code in (400, 403):
+                info = self._parse_json(e.cause.read().decode('utf-8'), display_id)
+                error = info['errors'][0]
+                if error.get('code') == 'access.denied.geoblocked':
+                    self.raise_geo_restricted(
+                        msg=error.get('detail'), countries=self._GEO_COUNTRIES)
+                raise ExtractorError(info['errors'][0]['detail'], expected=True)
+            raise
+
+        hls_url = info['data']['attributes']['streaming']['hls']['url']
+
+        formats = self._extract_m3u8_formats(
+            hls_url, display_id, ext='mp4', entry_protocol='m3u8_native',
+            m3u8_id='hls')
+
+        series = self._html_search_regex(
+            r'(?s)<h1[^>]+class=["\'].*?\bshow_title\b.*?["\'][^>]*>(.+?)</h1>',
+            webpage, 'series', fatal=False)
+        episode = self._search_regex(
+            r'<p[^>]+class=["\'].*?\bdesc_ep\b.*?["\'][^>]*>\s*<br/>\s*<b>([^<]+)',
+            webpage, 'episode', fatal=False)
+
+        mobj = re.search(
+            r'(?s)<span[^>]+class=["\']dates["\'][^>]*>.+?\bS\.(?P<season_number>\d+)\s+E\.(?P<episode_number>\d+)\s*-\s*(?P<upload_date>\d{2}/\d{2}/\d{4})',
+            webpage)
+        if mobj:
+            season_number = int(mobj.group('season_number'))
+            episode_number = int(mobj.group('episode_number'))
+            upload_date = unified_strdate(mobj.group('upload_date'))
+        else:
+            season_number = episode_number = upload_date = None
+
+        return {
+            'id': info_url.rpartition('/')[-1],
+            'display_id': display_id,
+            'title': title,
+            'description': self._og_search_description(webpage),
+            'thumbnail': self._og_search_thumbnail(webpage),
+            'series': series,
+            'season_number': season_number,
+            'episode': episode,
+            'episode_number': episode_number,
+            'upload_date': upload_date,
+            'formats': formats,
+        }

youtube_dl/extractor/extractors.py

@@ -246,7 +246,10 @@ from .dfb import DFBIE
 from .dhm import DHMIE
 from .dotsub import DotsubIE
 from .douyutv import DouyuTVIE
-from .dplay import DPlayIE
+from .dplay import (
+    DPlayIE,
+    DPlayItIE,
+)
 from .dramafever import (
     DramaFeverIE,
     DramaFeverSeriesIE,

youtube_dl/extractor/generic.py

@@ -449,6 +449,23 @@
                 },
             }],
         },
+        {
+            # Brightcove with UUID in videoPlayer
+            'url': 'http://www8.hp.com/cn/zh/home.html',
+            'info_dict': {
+                'id': '5255815316001',
+                'ext': 'mp4',
+                'title': 'Sprocket Video - China',
+                'description': 'Sprocket Video - China',
+                'uploader': 'HP-Video Gallery',
+                'timestamp': 1482263210,
+                'upload_date': '20161220',
+                'uploader_id': '1107601872001',
+            },
+            'params': {
+                'skip_download': True,  # m3u8 download
+            },
+        },
         # ooyala video
         {
             'url': 'http://www.rollingstone.com/music/videos/norwegian-dj-cashmere-cat-goes-spartan-on-with-me-premiere-20131219',
@@ -2533,7 +2550,10 @@ class GenericIE(InfoExtractor):
             try:
                 jwplayer_data = self._parse_json(
                     jwplayer_data_str, video_id, transform_source=js_to_json)
-                return self._parse_jwplayer_data(jwplayer_data, video_id)
+                info = self._parse_jwplayer_data(
+                    jwplayer_data, video_id, require_title=False)
+                if not info.get('title'):
+                    info['title'] = video_title
             except ExtractorError:
                 pass

youtube_dl/extractor/miomio.py

@@ -51,6 +51,7 @@
             'ext': 'mp4',
             'title': 'マツコの知らない世界【劇的進化SPビニール傘冷凍食品2016】 1_2 - 16 05 31',
         },
+        'skip': 'Unable to load videos',
     }]
 
     def _extract_mioplayer(self, webpage, video_id, title, http_headers):
@@ -94,9 +95,18 @@
         return entries
 
+    def _download_chinese_webpage(self, *args, **kwargs):
+        # Requests with English locales return garbage
+        headers = {
+            'Accept-Language': 'zh-TW,en-US;q=0.7,en;q=0.3',
+        }
+        kwargs.setdefault('headers', {}).update(headers)
+        return self._download_webpage(*args, **kwargs)
+
     def _real_extract(self, url):
         video_id = self._match_id(url)
 
-        webpage = self._download_webpage(url, video_id)
+        webpage = self._download_chinese_webpage(
+            url, video_id)
 
         title = self._html_search_meta(
             'description', webpage, 'title', fatal=True)
@@ -106,7 +116,7 @@
 
         if '_h5' in mioplayer_path:
             player_url = compat_urlparse.urljoin(url, mioplayer_path)
-            player_webpage = self._download_webpage(
+            player_webpage = self._download_chinese_webpage(
                 player_url, video_id,
                 note='Downloading player webpage', headers={'Referer': url})
             entries = self._parse_html5_media_entries(player_url, player_webpage, video_id)

youtube_dl/extractor/openload.py

@@ -75,22 +75,38 @@
             '<span[^>]+id="[^"]+"[^>]*>([0-9A-Za-z]+)</span>',
             webpage, 'openload ID')
 
-        first_char = int(ol_id[0])
-        urlcode = []
-        num = 1
-
-        while num < len(ol_id):
-            i = ord(ol_id[num])
-            key = 0
-            if i <= 90:
-                key = i - 65
-            elif i >= 97:
-                key = 25 + i - 97
-            urlcode.append((key, compat_chr(int(ol_id[num + 2:num + 5]) // int(ol_id[num + 1]) - first_char)))
-            num += 5
-
-        video_url = 'https://openload.co/stream/' + ''.join(
-            [value for _, value in sorted(urlcode, key=lambda x: x[0])])
+        video_url_chars = []
+
+        first_char = ord(ol_id[0])
+        key = first_char - 50
+        maxKey = max(2, key)
+        key = min(maxKey, len(ol_id) - 22)
+        t = ol_id[key:key + 20]
+
+        hashMap = {}
+        v = ol_id.replace(t, "")
+        h = 0
+
+        while h < len(t):
+            f = t[h:h + 2]
+            i = int(f, 16)
+            hashMap[h / 2] = i
+            h += 2
+
+        h = 0
+
+        while h < len(v):
+            B = v[h:h + 2]
+            i = int(B, 16)
+            index = (h / 2) % 10
+            A = hashMap[index]
+            i = i ^ 137
+            i = i ^ A
+            video_url_chars.append(compat_chr(i))
+            h += 2
+
+        video_url = 'https://openload.co/stream/%s?mime=true'
+        video_url = video_url % (''.join(video_url_chars))
 
         title = self._og_search_title(webpage, default=None) or self._search_regex(
             r'<span[^>]+class=["\']title["\'][^>]*>([^<]+)', webpage,
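The replacement decoding logic above is terse, so here is a minimal standalone sketch of the same scheme as a reading aid (an illustration only, not part of the change): ol_id stands for the obfuscated span contents the extractor pulls from the page, and plain chr() is used where the extractor uses compat_chr for Python 2 compatibility.

def decode_openload_stream_url(ol_id):
    # A 20-character key block is cut out of ol_id; its offset is derived
    # from the first character (mirrors the new code above).
    first_char = ord(ol_id[0])
    key = min(max(2, first_char - 50), len(ol_id) - 22)
    key_block = ol_id[key:key + 20]

    # The key block is ten hex byte pairs.
    key_bytes = [int(key_block[i:i + 2], 16) for i in range(0, 20, 2)]

    # Every remaining hex byte pair is XORed with 137 and with a key byte,
    # cycling through the ten key bytes.
    payload = ol_id.replace(key_block, '')
    chars = []
    for i in range(0, len(payload), 2):
        byte = int(payload[i:i + 2], 16)
        chars.append(chr(byte ^ 137 ^ key_bytes[(i // 2) % 10]))

    return 'https://openload.co/stream/%s?mime=true' % ''.join(chars)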

youtube_dl/extractor/prosiebensat1.py

@@ -300,6 +300,21 @@
                 'skip_download': True,
             },
         },
+        {
+            # title in <h2 class="subtitle">
+            'url': 'http://www.prosieben.de/stars/oscar-award/videos/jetzt-erst-enthuellt-das-geheimnis-von-emma-stones-oscar-robe-clip',
+            'info_dict': {
+                'id': '4895826',
+                'ext': 'mp4',
+                'title': 'Jetzt erst enthüllt: Das Geheimnis von Emma Stones Oscar-Robe',
+                'description': 'md5:e5ace2bc43fadf7b63adc6187e9450b9',
+                'upload_date': '20170302',
+            },
+            'params': {
+                'skip_download': True,
+            },
+            'skip': 'geo restricted to Germany',
+        },
         {
             # geo restricted to Germany
             'url': 'http://www.kabeleinsdoku.de/tv/mayday-alarm-im-cockpit/video/102-notlandung-im-hudson-river-ganze-folge',
@@ -338,6 +353,7 @@
         r'<header class="module_header">\s*<h2>([^<]+)</h2>\s*</header>',
         r'<h2 class="video-title" itemprop="name">\s*(.+?)</h2>',
         r'<div[^>]+id="veeseoTitle"[^>]*>(.+?)</div>',
+        r'<h2[^>]+class="subtitle"[^>]*>([^<]+)</h2>',
     ]
     _DESCRIPTION_REGEXES = [
         r'<p itemprop="description">\s*(.+?)</p>',
@@ -369,7 +385,9 @@
     def _extract_clip(self, url, webpage):
         clip_id = self._html_search_regex(
             self._CLIPID_REGEXES, webpage, 'clip id')
-        title = self._html_search_regex(self._TITLE_REGEXES, webpage, 'title')
+        title = self._html_search_regex(
+            self._TITLE_REGEXES, webpage, 'title',
+            default=None) or self._og_search_title(webpage)
         info = self._extract_video_info(url, clip_id)
         description = self._html_search_regex(
             self._DESCRIPTION_REGEXES, webpage, 'description', default=None)

youtube_dl/extractor/soundcloud.py

@@ -121,7 +121,7 @@
         },
     ]
 
-    _CLIENT_ID = 'fDoItMDbsbZz8dY16ZzARCZmzgHBPotA'
+    _CLIENT_ID = '2t9loNQH90kzJcsFCODdigxfp325aq4z'
     _IPHONE_CLIENT_ID = '376f225bf427445fc4bfb6b99b72e0bf'
 
     @staticmethod

youtube_dl/extractor/telequebec.py

@@ -2,15 +2,17 @@
 from __future__ import unicode_literals
 
 from .common import InfoExtractor
+from ..compat import compat_str
 from ..utils import (
     int_or_none,
     smuggle_url,
+    try_get,
 )
 
 
 class TeleQuebecIE(InfoExtractor):
     _VALID_URL = r'https?://zonevideo\.telequebec\.tv/media/(?P<id>\d+)'
-    _TEST = {
+    _TESTS = [{
         'url': 'http://zonevideo.telequebec.tv/media/20984/le-couronnement-de-new-york/couronnement-de-new-york',
         'md5': 'fe95a0957e5707b1b01f5013e725c90f',
         'info_dict': {
@@ -18,10 +20,14 @@ class TeleQuebecIE(InfoExtractor):
             'ext': 'mp4',
             'title': 'Le couronnement de New York',
             'description': 'md5:f5b3d27a689ec6c1486132b2d687d432',
-            'upload_date': '20160220',
-            'timestamp': 1455965438,
+            'upload_date': '20170201',
+            'timestamp': 1485972222,
         }
-    }
+    }, {
+        # no description
+        'url': 'http://zonevideo.telequebec.tv/media/30261',
+        'only_matching': True,
+    }]
 
     def _real_extract(self, url):
         media_id = self._match_id(url)
@@ -31,9 +37,13 @@ class TeleQuebecIE(InfoExtractor):
         return {
             '_type': 'url_transparent',
             'id': media_id,
-            'url': smuggle_url('limelight:media:' + media_data['streamInfo']['sourceId'], {'geo_countries': ['CA']}),
+            'url': smuggle_url(
+                'limelight:media:' + media_data['streamInfo']['sourceId'],
+                {'geo_countries': ['CA']}),
             'title': media_data['title'],
-            'description': media_data.get('descriptions', [{'text': None}])[0].get('text'),
-            'duration': int_or_none(media_data.get('durationInMilliseconds'), 1000),
+            'description': try_get(
+                media_data, lambda x: x['descriptions'][0]['text'], compat_str),
+            'duration': int_or_none(
+                media_data.get('durationInMilliseconds'), 1000),
             'ie_key': 'LimelightMedia',
         }

youtube_dl/extractor/twitch.py

@@ -104,7 +104,7 @@
             login_page, handle, 'Logging in as %s' % username, {
                 'username': username,
                 'password': password,
-        })
+            })
 
         if re.search(r'(?i)<form[^>]+id="two-factor-submit"', redirect_page) is not None:
             # TODO: Add mechanism to request an SMS or phone call

youtube_dl/extractor/wdr.py

@@ -19,9 +19,10 @@
     def _extract_wdr_video(self, webpage, display_id):
         # for wdr.de the data-extension is in a tag with the class "mediaLink"
         # for wdr.de radio players, in a tag with the class "wdrrPlayerPlayBtn"
-        # for wdrmaus its in a link to the page in a multiline "videoLink"-tag
+        # for wdrmaus, in a tag with the class "videoButton" (previously a link
+        # to the page in a multiline "videoLink"-tag)
         json_metadata = self._html_search_regex(
-            r'class=(?:"(?:mediaLink|wdrrPlayerPlayBtn)\b[^"]*"[^>]+|"videoLink\b[^"]*"[\s]*>\n[^\n]*)data-extension="([^"]+)"',
+            r'class=(?:"(?:mediaLink|wdrrPlayerPlayBtn|videoButton)\b[^"]*"[^>]+|"videoLink\b[^"]*"[\s]*>\n[^\n]*)data-extension="([^"]+)"',
             webpage, 'media link', default=None, flags=re.MULTILINE)
 
         if not json_metadata:
@@ -32,7 +33,7 @@
         jsonp_url = media_link_obj['mediaObj']['url']
 
         metadata = self._download_json(
-            jsonp_url, 'metadata', transform_source=strip_jsonp)
+            jsonp_url, display_id, transform_source=strip_jsonp)
 
         metadata_tracker_data = metadata['trackerData']
         metadata_media_resource = metadata['mediaResource']
@@ -161,23 +162,23 @@
         {
             'url': 'http://www.wdrmaus.de/aktuelle-sendung/index.php5',
             'info_dict': {
-                'id': 'mdb-1096487',
-                'ext': 'flv',
+                'id': 'mdb-1323501',
+                'ext': 'mp4',
                 'upload_date': 're:^[0-9]{8}$',
                 'title': 're:^Die Sendung mit der Maus vom [0-9.]{10}$',
-                'description': '- Die Sendung mit der Maus -',
+                'description': 'Die Seite mit der Maus -',
             },
             'skip': 'The id changes from week to week because of the new episode'
         },
         {
-            'url': 'http://www.wdrmaus.de/sachgeschichten/sachgeschichten/achterbahn.php5',
+            'url': 'http://www.wdrmaus.de/filme/sachgeschichten/achterbahn.php5',
             'md5': '803138901f6368ee497b4d195bb164f2',
             'info_dict': {
                 'id': 'mdb-186083',
                 'ext': 'mp4',
                 'upload_date': '20130919',
                 'title': 'Sachgeschichte - Achterbahn ',
-                'description': '- Die Sendung mit der Maus -',
+                'description': 'Die Seite mit der Maus -',
             },
         },
         {
@@ -186,7 +187,7 @@
             'info_dict': {
                 'id': 'mdb-869971',
                 'ext': 'flv',
-                'title': 'Funkhaus Europa Livestream',
+                'title': 'COSMO Livestream',
                 'description': 'md5:2309992a6716c347891c045be50992e4',
                 'upload_date': '20160101',
             },

youtube_dl/version.py

@@ -1,3 +1,3 @@
 from __future__ import unicode_literals
 
-__version__ = '2017.03.06'
+__version__ = '2017.03.10'