Compare commits

...

78 Commits

Author SHA1 Message Date
Philipp Hagemeister
7c8ea53b96 release 2014.11.26.1 2014-11-26 22:01:06 +01:00
Philipp Hagemeister
dcddc10a50 [test_unicode_literals] Arm unicode_literals check
From now on, the line

from __future__ import unicode_literals

should be contained in every single Python file lest we run into any more 2.x/3.x issues.
Going forward, we're likely to develop on 3.x only and would likely miss subtle bugs otherwise.
2014-11-26 20:01:22 +01:00
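An illustration of what the import buys (not part of the diff itself): with `unicode_literals` in effect, bare string literals are text on both interpreter lines, so 2.x/3.x code paths stop diverging.

```
# Illustration only: with unicode_literals, '...' literals are unicode on
# Python 2 just as they are str on Python 3.
from __future__ import unicode_literals

s = 'caf\xe9'
assert isinstance(s, type(u''))  # holds on 2.6+/2.7 and 3.3+ alike
```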
Sergey M․
a1008af412 [gorillavid] Update IE_DESC 2014-11-27 00:24:19 +06:00
Sergey M․
61c0663c1e [udemy] Generalize download json and fix login 2014-11-26 21:25:43 +06:00
Sergey M․
81a7a521c5 [gorillavid] Remove unused import 2014-11-26 21:02:46 +06:00
Sergey M․
e293711802 [udemy] Set session cookies to API requests (Closes #4124, closes #4219, closes #4308) 2014-11-26 21:00:18 +06:00
Sergey M․
ceb3367320 [gorillavid] Generalize extraction with countdown timeout and support faststream.in (Closes #4297) 2014-11-26 20:02:40 +06:00
Philipp Hagemeister
a03aaaed2e Declare Python 3.2 compatibility 2014-11-26 13:08:42 +01:00
Philipp Hagemeister
e075a44afb [tests] Remove useless u prefixes 2014-11-26 13:07:32 +01:00
Philipp Hagemeister
8865bdeb37 Remove useless u prefixes 2014-11-26 13:06:02 +01:00
Philipp Hagemeister
3aa578cad2 [ffmpeg] Modernize 2014-11-26 13:05:49 +01:00
Philipp Hagemeister
d3b5101a91 [videopremium] Modernize 2014-11-26 13:03:22 +01:00
Philipp Hagemeister
5c32110114 [videofyme] Modernize 2014-11-26 13:01:39 +01:00
Philipp Hagemeister
24144e3b8d [tvp] Modernize 2014-11-26 12:58:53 +01:00
Philipp Hagemeister
b3034f9df7 [trilulilu] Modernize 2014-11-26 12:56:43 +01:00
Philipp Hagemeister
4c6d2ff8dc [sohu] Modernize 2014-11-26 12:53:55 +01:00
Philipp Hagemeister
faf3494894 [redtube] Modernize 2014-11-26 12:52:45 +01:00
Philipp Hagemeister
535a66ef66 [muzu] Modernize 2014-11-26 12:50:37 +01:00
Philipp Hagemeister
5c40bba82f [hotnewhiphop] Modernize 2014-11-26 12:45:40 +01:00
Philipp Hagemeister
855dc479c2 [subtitles] Modernize 2014-11-26 12:43:06 +01:00
Philipp Hagemeister
0792d5634e [youtube] Remove useless u prefixes 2014-11-26 12:41:53 +01:00
Philipp Hagemeister
e91cdcae1a [appletrailers] Modernize 2014-11-26 12:41:24 +01:00
Philipp Hagemeister
27e1400f55 [aparat] Modernize 2014-11-26 12:40:51 +01:00
Philipp Hagemeister
e0938e7731 [addanime] Modernize 2014-11-26 12:40:05 +01:00
Philipp Hagemeister
b72823a0a4 [francetv] PEP8 2014-11-26 12:38:20 +01:00
Philipp Hagemeister
673cf0e773 [update] Remove useless import 2014-11-26 12:37:45 +01:00
Philipp Hagemeister
f8aace93cd [academicearth] Modernize 2014-11-26 12:35:57 +01:00
Philipp Hagemeister
80310134e0 [mplayer] Modernize 2014-11-26 12:34:52 +01:00
Philipp Hagemeister
4d2d638df4 [http] Modernize 2014-11-26 12:27:36 +01:00
Philipp Hagemeister
0e44f90e18 [hls] Remove useless u prefixes 2014-11-26 12:26:21 +01:00
Philipp Hagemeister
15938ab67a [update] Modernize 2014-11-26 12:24:57 +01:00
Philipp Hagemeister
ab4ee31eb1 [utils] remove useless u prefix 2014-11-26 11:50:22 +01:00
Philipp Hagemeister
b061ea6e9f [compat] Beautify assertion 2014-11-26 11:48:09 +01:00
Philipp Hagemeister
4aae94f9d0 [YoutubeDL] Remove incorrect documentation 2014-11-26 11:25:43 +01:00
Philipp Hagemeister
acda92f6bc Clarify --no-playlist documentation (Closes #4309) 2014-11-26 10:51:03 +01:00
Philipp Hagemeister
ddfd0f2727 release 2014.11.26 2014-11-26 10:46:12 +01:00
Philipp Hagemeister
d0720e7118 Merge branch 'master' of github.com:rg3/youtube-dl 2014-11-26 10:45:57 +01:00
Philipp Hagemeister
4e262a8838 [generic] Detect direct video links (Fixes #4149, #4313) 2014-11-26 10:44:39 +01:00
Sergey M․
b9ed3af343 [tass] Add extractor (Closes #4296) 2014-11-25 22:24:33 +06:00
Philipp Hagemeister
63c9b2c1d9 release 2014.11.25.1 2014-11-25 14:34:29 +01:00
Philipp Hagemeister
65f3a228b1 [generic] Add support for LazyYT embeds (Fixes #4306) 2014-11-25 14:34:19 +01:00
Philipp Hagemeister
3004ae2c3a Credit @t0mm0 for xminus (#4302) 2014-11-25 12:16:48 +01:00
Philipp Hagemeister
d9836a5917 release 2014.11.25 2014-11-25 09:56:52 +01:00
Philipp Hagemeister
be64b5b098 [xminus] Simplify and extend (#4302) 2014-11-25 09:54:54 +01:00
Philipp Hagemeister
c3e74731c2 [README] Mention _og_search_description (#4304)
Lots of sites do have this meta tag, so just add it to the example.
2014-11-25 09:36:27 +01:00
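For context, a minimal sketch of what `_og_search_description` amounts to; the real helper in `youtube_dl/extractor/common.py` is more robust (quoting variants, attribute order, fallbacks), so treat this as an approximation.

```
import re

def og_search_description(webpage):
    # Simplified: grab the content of <meta property="og:description" ...>
    m = re.search(
        r'<meta[^>]+property=["\']og:description["\'][^>]+content=["\']([^"\']+)',
        webpage)
    return m.group(1) if m else None

html = '<meta property="og:description" content="An example description">'
assert og_search_description(html) == 'An example description'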
Philipp Hagemeister
c920d7f00d [README] Adapt code to new style
Nearly every IE downloads the webpage first anyway.
2014-11-25 09:23:46 +01:00
Philipp Hagemeister
0bbf12239c Merge remote-tracking branch 't0mm0/x-minus' 2014-11-25 09:22:33 +01:00
Philipp Hagemeister
70d68eb46f Credit @MatthewRayfield for tmz (#4304) 2014-11-25 09:17:59 +01:00
Philipp Hagemeister
c553fe5d29 [tmz] Simplify (#4304) 2014-11-25 09:16:40 +01:00
Matthew Rayfield
f0c3d729d7 [tmz] Add new extractor 2014-11-25 02:54:13 -05:00
t0mm0
1cdedfee10 [XMinus] Added new extractor. 2014-11-25 03:25:28 +00:00
Philipp Hagemeister
93129d9442 release 2014.11.24 2014-11-24 22:56:43 +01:00
Philipp Hagemeister
e8c8653e9d Merge remote-tracking branch 'origin/master' 2014-11-24 22:52:04 +01:00
Philipp Hagemeister
fab89c67c5 Credit @ossi96 for bpb (#4298) 2014-11-24 22:47:49 +01:00
Philipp Hagemeister
3d960a22fa [bpb] Simplify (#4298) 2014-11-24 22:47:23 +01:00
Philipp Hagemeister
51bbb084d3 Merge remote-tracking branch 'ossi96/bpb' 2014-11-24 22:42:56 +01:00
Naglis Jonaitis
2c25a2bd29 [tunein] Add new extractor (Closes #4097) 2014-11-24 23:15:33 +02:00
Oskar Jauch
355682be01 bpb Add new extractor 2014-11-24 20:02:00 +01:00
Jaime Marquínez Ferrándiz
00e9d396ab [francetv] Use the m3u8 manifest for georestricted videos (closes #3963)
Generating the correct urls for the f4m segments seems to require a lot of work.
Also raise an error if the video is not available from your location.
2014-11-24 19:49:43 +01:00
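A hedged sketch of the error-raising pattern described here; the flag name is an assumption for illustration, not the actual field in the francetv JSON.

```
from youtube_dl.utils import ExtractorError

def check_geo_restriction(video_info):
    # 'geoblocked' is a hypothetical key used only for illustration
    if video_info.get('geoblocked'):
        raise ExtractorError(
            'This video is not available from your location', expected=True)

check_geo_restriction({'geoblocked': False})  # passes silently
```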
Philipp Hagemeister
14d4e90eb1 [downloader/__init__] Define proper __all__ 2014-11-23 22:25:12 +01:00
Philipp Hagemeister
b74e86f48a Fix all PEP8 issues except E501 2014-11-23 22:21:46 +01:00
Philipp Hagemeister
3d36cea4ac [vk] PEP8 2014-11-23 22:14:27 +01:00
Philipp Hagemeister
380b822003 Remove outdated transition helper scripts 2014-11-23 22:13:03 +01:00
Philipp Hagemeister
b66e699877 [myspace] pep8 and modernization 2014-11-23 22:12:18 +01:00
Philipp Hagemeister
27f8b0994e Merge remote-tracking branch 'jtwaleson/master' 2014-11-23 22:10:26 +01:00
Philipp Hagemeister
e311b6389a Credit @daohoangson for zingmp3 (#4288) 2014-11-23 22:01:15 +01:00
Jouke Waleson
fab6d4c048 remove useless line, the result is never used 2014-11-23 22:00:35 +01:00
Philipp Hagemeister
4ffc31033e [zingmp3] Simplify and PEP8 (#4288) 2014-11-23 22:00:25 +01:00
Philipp Hagemeister
c1777d5cb3 Merge remote-tracking branch 'daohoangson/zing-mp3' 2014-11-23 21:55:51 +01:00
Jouke Waleson
9e1a5b8455 PEP8: applied even more rules 2014-11-23 21:39:15 +01:00
Philipp Hagemeister
784b6d3a9b Merge remote-tracking branch 'jtwaleson/master' 2014-11-23 21:33:31 +01:00
Dao Hoang Son
c66bdc4869 [zingmp3] Added support for songs and albums 2014-11-24 03:25:47 +07:00
Jouke Waleson
2514d2635e PEP8: E225,E227 2014-11-23 21:23:05 +01:00
Jouke Waleson
8bcc875676 PEP8: more applied 2014-11-23 21:20:46 +01:00
Jouke Waleson
5f6a1245ff PEP8 applied 2014-11-23 20:41:03 +01:00
Philipp Hagemeister
f3a3407226 [youtube] Clarify keywords 2014-11-23 20:09:10 +01:00
Sergey M․
598c218f7b [smotri] Adapt to new API and modernize 2014-11-23 23:53:41 +06:00
Naglis Jonaitis
4698b14b76 [rtlxl] Strip additional dot from video URL (#4115) 2014-11-23 13:28:09 +02:00
233 changed files with 1785 additions and 1295 deletions

View File

@@ -84,3 +84,7 @@ xantares
Jan Matějka Jan Matějka
Mauroy Sébastien Mauroy Sébastien
William Sewell William Sewell
Dao Hoang Son
Oskar Jauch
Matthew Rayfield
t0mm0

View File

@@ -30,7 +30,7 @@ Alternatively, refer to the developer instructions below for how to check out an
# DESCRIPTION # DESCRIPTION
**youtube-dl** is a small command-line program to download videos from **youtube-dl** is a small command-line program to download videos from
YouTube.com and a few more sites. It requires the Python interpreter, version YouTube.com and a few more sites. It requires the Python interpreter, version
2.6, 2.7, or 3.3+, and it is not platform specific. It should work on 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on
your Unix box, on Windows or on Mac OS X. It is released to the public domain, your Unix box, on Windows or on Mac OS X. It is released to the public domain,
which means you can modify it, redistribute it or use it however you like. which means you can modify it, redistribute it or use it however you like.
@@ -93,7 +93,8 @@ which means you can modify it, redistribute it or use it however you like.
COUNT views COUNT views
--max-views COUNT Do not download any videos with more than --max-views COUNT Do not download any videos with more than
COUNT views COUNT views
--no-playlist download only the currently playing video --no-playlist If the URL refers to a video and a
playlist, download only the video.
--age-limit YEARS download only videos suitable for the given --age-limit YEARS download only videos suitable for the given
age age
--download-archive FILE Download only videos not listed in the --download-archive FILE Download only videos not listed in the
@@ -492,14 +493,15 @@ If you want to add support for a new site, you can follow this quick list (assum
def _real_extract(self, url): def _real_extract(self, url):
video_id = self._match_id(url) video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
# TODO more code goes here, for example ... # TODO more code goes here, for example ...
webpage = self._download_webpage(url, video_id)
title = self._html_search_regex(r'<h1>(.*?)</h1>', webpage, 'title') title = self._html_search_regex(r'<h1>(.*?)</h1>', webpage, 'title')
return { return {
'id': video_id, 'id': video_id,
'title': title, 'title': title,
'description': self._og_search_description(webpage),
# TODO more properties (see youtube_dl/extractor/common.py) # TODO more properties (see youtube_dl/extractor/common.py)
} }
``` ```
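Put together, the resulting README example after these two commits (webpage fetched before the TODO, plus the `_og_search_description` line) reads:

```
def _real_extract(self, url):
    video_id = self._match_id(url)
    webpage = self._download_webpage(url, video_id)

    # TODO more code goes here, for example ...
    title = self._html_search_regex(r'<h1>(.*?)</h1>', webpage, 'title')

    return {
        'id': video_id,
        'title': title,
        'description': self._og_search_description(webpage),
        # TODO more properties (see youtube_dl/extractor/common.py)
    }
```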

View File

@@ -1,4 +1,6 @@
#!/usr/bin/env python #!/usr/bin/env python
from __future__ import unicode_literals
import os import os
from os.path import dirname as dirn from os.path import dirname as dirn
import sys import sys
@@ -9,6 +11,7 @@ import youtube_dl
BASH_COMPLETION_FILE = "youtube-dl.bash-completion" BASH_COMPLETION_FILE = "youtube-dl.bash-completion"
BASH_COMPLETION_TEMPLATE = "devscripts/bash-completion.in" BASH_COMPLETION_TEMPLATE = "devscripts/bash-completion.in"
def build_completion(opt_parser): def build_completion(opt_parser):
opts_flag = [] opts_flag = []
for group in opt_parser.option_groups: for group in opt_parser.option_groups:

View File

@@ -233,6 +233,7 @@ def rmtree(path):
#============================================================================== #==============================================================================
class BuildError(Exception): class BuildError(Exception):
def __init__(self, output, code=500): def __init__(self, output, code=500):
self.output = output self.output = output

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python #!/usr/bin/env python
from __future__ import unicode_literals
""" """
This script employs a VERY basic heuristic ('porn' in webpage.lower()) to check This script employs a VERY basic heuristic ('porn' in webpage.lower()) to check

View File

@@ -23,13 +23,13 @@ EXTRA_ARGS = {
'batch-file': ['--require-parameter'], 'batch-file': ['--require-parameter'],
} }
def build_completion(opt_parser): def build_completion(opt_parser):
commands = [] commands = []
for group in opt_parser.option_groups: for group in opt_parser.option_groups:
for option in group.option_list: for option in group.option_list:
long_option = option.get_opt_string().strip('-') long_option = option.get_opt_string().strip('-')
help_msg = shell_quote([option.help])
complete_cmd = ['complete', '--command', 'youtube-dl', '--long-option', long_option] complete_cmd = ['complete', '--command', 'youtube-dl', '--long-option', long_option]
if option._short_opts: if option._short_opts:
complete_cmd += ['--short-option', option._short_opts[0].strip('-')] complete_cmd += ['--short-option', option._short_opts[0].strip('-')]

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
from __future__ import unicode_literals
import json import json
import sys import sys

View File

@@ -1,8 +1,7 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
from __future__ import unicode_literals
import hashlib import hashlib
import shutil
import subprocess
import tempfile
import urllib.request import urllib.request
import json import json

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
from __future__ import unicode_literals, with_statement
import rsa import rsa
import json import json
@@ -29,4 +30,5 @@ signature = hexlify(rsa.pkcs1.sign(json.dumps(versions_info, sort_keys=True).enc
print('signature: ' + signature) print('signature: ' + signature)
versions_info['signature'] = signature versions_info['signature'] = signature
json.dump(versions_info, open('update/versions.json', 'w'), indent=4, sort_keys=True) with open('update/versions.json', 'w') as versionsf:
json.dump(versions_info, versionsf, indent=4, sort_keys=True)

View File

@@ -1,7 +1,7 @@
#!/usr/bin/env python #!/usr/bin/env python
# coding: utf-8 # coding: utf-8
from __future__ import with_statement from __future__ import with_statement, unicode_literals
import datetime import datetime
import glob import glob
@@ -13,7 +13,7 @@ year = str(datetime.datetime.now().year)
for fn in glob.glob('*.html*'): for fn in glob.glob('*.html*'):
with io.open(fn, encoding='utf-8') as f: with io.open(fn, encoding='utf-8') as f:
content = f.read() content = f.read()
newc = re.sub(u'(?P<copyright>Copyright © 2006-)(?P<year>[0-9]{4})', u'Copyright © 2006-' + year, content) newc = re.sub(r'(?P<copyright>Copyright © 2006-)(?P<year>[0-9]{4})', 'Copyright © 2006-' + year, content)
if content != newc: if content != newc:
tmpFn = fn + '.part' tmpFn = fn + '.part'
with io.open(tmpFn, 'wt', encoding='utf-8') as outf: with io.open(tmpFn, 'wt', encoding='utf-8') as outf:

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
from __future__ import unicode_literals
import datetime import datetime
import io import io
@@ -73,4 +74,3 @@ atom_template = atom_template.replace('@ENTRIES@', entries_str)
with io.open('update/releases.atom', 'w', encoding='utf-8') as atom_file: with io.open('update/releases.atom', 'w', encoding='utf-8') as atom_file:
atom_file.write(atom_template) atom_file.write(atom_template)

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
from __future__ import unicode_literals
import sys import sys
import os import os
@@ -9,6 +10,7 @@ sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(
import youtube_dl import youtube_dl
def main(): def main():
with open('supportedsites.html.in', 'r', encoding='utf-8') as tmplf: with open('supportedsites.html.in', 'r', encoding='utf-8') as tmplf:
template = tmplf.read() template = tmplf.read()
@@ -21,7 +23,7 @@ def main():
continue continue
elif ie_desc is not None: elif ie_desc is not None:
ie_html += ': {}'.format(ie.IE_DESC) ie_html += ': {}'.format(ie.IE_DESC)
if ie.working() == False: if not ie.working():
ie_html += ' (Currently broken)' ie_html += ' (Currently broken)'
ie_htmls.append('<li>{}</li>'.format(ie_html)) ie_htmls.append('<li>{}</li>'.format(ie_html))

View File

@@ -1,3 +1,5 @@
from __future__ import unicode_literals
import io import io
import sys import sys
import re import re

View File

@@ -1,3 +1,4 @@
from __future__ import unicode_literals
import io import io
import os.path import os.path

View File

@@ -1,40 +0,0 @@
#!/usr/bin/env python
import sys, os
try:
import urllib.request as compat_urllib_request
except ImportError: # Python 2
import urllib2 as compat_urllib_request
sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n')
sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n')
sys.stderr.write(u'The new location of the binaries is https://github.com/rg3/youtube-dl/downloads, not the git repository.\n\n')
try:
raw_input()
except NameError: # Python 3
input()
filename = sys.argv[0]
API_URL = "https://api.github.com/repos/rg3/youtube-dl/downloads"
BIN_URL = "https://github.com/downloads/rg3/youtube-dl/youtube-dl"
if not os.access(filename, os.W_OK):
sys.exit('ERROR: no write permissions on %s' % filename)
try:
urlh = compat_urllib_request.urlopen(BIN_URL)
newcontent = urlh.read()
urlh.close()
except (IOError, OSError) as err:
sys.exit('ERROR: unable to download latest version')
try:
with open(filename, 'wb') as outf:
outf.write(newcontent)
except (IOError, OSError) as err:
sys.exit('ERROR: unable to overwrite current version')
sys.stderr.write(u'Done! Now you can run youtube-dl.\n')

View File

@@ -1,12 +0,0 @@
from distutils.core import setup
import py2exe
py2exe_options = {
"bundle_files": 1,
"compressed": 1,
"optimize": 2,
"dist_dir": '.',
"dll_excludes": ['w9xpopen.exe']
}
setup(console=['youtube-dl.py'], options={ "py2exe": py2exe_options }, zipfile=None)

View File

@@ -1,102 +0,0 @@
#!/usr/bin/env python
import sys, os
import urllib2
import json, hashlib
def rsa_verify(message, signature, key):
from struct import pack
from hashlib import sha256
from sys import version_info
def b(x):
if version_info[0] == 2: return x
else: return x.encode('latin1')
assert(type(message) == type(b('')))
block_size = 0
n = key[0]
while n:
block_size += 1
n >>= 8
signature = pow(int(signature, 16), key[1], key[0])
raw_bytes = []
while signature:
raw_bytes.insert(0, pack("B", signature & 0xFF))
signature >>= 8
signature = (block_size - len(raw_bytes)) * b('\x00') + b('').join(raw_bytes)
if signature[0:2] != b('\x00\x01'): return False
signature = signature[2:]
if not b('\x00') in signature: return False
signature = signature[signature.index(b('\x00'))+1:]
if not signature.startswith(b('\x30\x31\x30\x0D\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x01\x05\x00\x04\x20')): return False
signature = signature[19:]
if signature != sha256(message).digest(): return False
return True
sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n')
sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n')
sys.stderr.write(u'From now on, get the binaries from http://rg3.github.com/youtube-dl/download.html, not from the git repository.\n\n')
raw_input()
filename = sys.argv[0]
UPDATE_URL = "http://rg3.github.io/youtube-dl/update/"
VERSION_URL = UPDATE_URL + 'LATEST_VERSION'
JSON_URL = UPDATE_URL + 'versions.json'
UPDATES_RSA_KEY = (0x9d60ee4d8f805312fdb15a62f87b95bd66177b91df176765d13514a0f1754bcd2057295c5b6f1d35daa6742c3ffc9a82d3e118861c207995a8031e151d863c9927e304576bc80692bc8e094896fcf11b66f3e29e04e3a71e9a11558558acea1840aec37fc396fb6b65dc81a1c4144e03bd1c011de62e3f1357b327d08426fe93, 65537)
if not os.access(filename, os.W_OK):
sys.exit('ERROR: no write permissions on %s' % filename)
exe = os.path.abspath(filename)
directory = os.path.dirname(exe)
if not os.access(directory, os.W_OK):
sys.exit('ERROR: no write permissions on %s' % directory)
try:
versions_info = urllib2.urlopen(JSON_URL).read().decode('utf-8')
versions_info = json.loads(versions_info)
except:
sys.exit(u'ERROR: can\'t obtain versions info. Please try again later.')
if not 'signature' in versions_info:
sys.exit(u'ERROR: the versions file is not signed or corrupted. Aborting.')
signature = versions_info['signature']
del versions_info['signature']
if not rsa_verify(json.dumps(versions_info, sort_keys=True), signature, UPDATES_RSA_KEY):
sys.exit(u'ERROR: the versions file signature is invalid. Aborting.')
version = versions_info['versions'][versions_info['latest']]
try:
urlh = urllib2.urlopen(version['exe'][0])
newcontent = urlh.read()
urlh.close()
except (IOError, OSError) as err:
sys.exit('ERROR: unable to download latest version')
newcontent_hash = hashlib.sha256(newcontent).hexdigest()
if newcontent_hash != version['exe'][1]:
sys.exit(u'ERROR: the downloaded file hash does not match. Aborting.')
try:
with open(exe + '.new', 'wb') as outf:
outf.write(newcontent)
except (IOError, OSError) as err:
sys.exit(u'ERROR: unable to write the new version')
try:
bat = os.path.join(directory, 'youtube-dl-updater.bat')
b = open(bat, 'w')
b.write("""
echo Updating youtube-dl...
ping 127.0.0.1 -n 5 -w 1000 > NUL
move /Y "%s.new" "%s"
del "%s"
\n""" %(exe, exe, bat))
b.close()
os.startfile(bat)
except (IOError, OSError) as err:
sys.exit('ERROR: unable to overwrite current version')
sys.stderr.write(u'Done! Now you can run youtube-dl.\n')

View File

@@ -1,4 +1,6 @@
#!/usr/bin/env python #!/usr/bin/env python
from __future__ import unicode_literals
import os import os
from os.path import dirname as dirn from os.path import dirname as dirn
import sys import sys

View File

@@ -4,7 +4,6 @@
from __future__ import print_function from __future__ import print_function
import os.path import os.path
import pkg_resources
import warnings import warnings
import sys import sys
@@ -103,7 +102,9 @@ setup(
"Programming Language :: Python :: 2.6", "Programming Language :: Python :: 2.6",
"Programming Language :: Python :: 2.7", "Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3", "Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.3" "Programming Language :: Python :: 3.2",
"Programming Language :: Python :: 3.3",
"Programming Language :: Python :: 3.4",
], ],
**params **params

View File

@@ -72,8 +72,10 @@ class FakeYDL(YoutubeDL):
def expect_warning(self, regex): def expect_warning(self, regex):
# Silence an expected warning matching a regex # Silence an expected warning matching a regex
old_report_warning = self.report_warning old_report_warning = self.report_warning
def report_warning(self, message): def report_warning(self, message):
if re.match(regex, message): return if re.match(regex, message):
return
old_report_warning(message) old_report_warning(message)
self.report_warning = types.MethodType(report_warning, self) self.report_warning = types.MethodType(report_warning, self)

View File

@@ -266,6 +266,7 @@ class TestFormatSelection(unittest.TestCase):
'ext': 'mp4', 'ext': 'mp4',
'width': None, 'width': None,
} }
def fname(templ): def fname(templ):
ydl = YoutubeDL({'outtmpl': templ}) ydl = YoutubeDL({'outtmpl': templ})
return ydl.prepare_filename(info) return ydl.prepare_filename(info)

View File

@@ -40,18 +40,22 @@ from youtube_dl.extractor import get_info_extractor
RETRIES = 3 RETRIES = 3
class YoutubeDL(youtube_dl.YoutubeDL): class YoutubeDL(youtube_dl.YoutubeDL):
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
self.to_stderr = self.to_screen self.to_stderr = self.to_screen
self.processed_info_dicts = [] self.processed_info_dicts = []
super(YoutubeDL, self).__init__(*args, **kwargs) super(YoutubeDL, self).__init__(*args, **kwargs)
def report_warning(self, message): def report_warning(self, message):
# Don't accept warnings during tests # Don't accept warnings during tests
raise ExtractorError(message) raise ExtractorError(message)
def process_info(self, info_dict): def process_info(self, info_dict):
self.processed_info_dicts.append(info_dict) self.processed_info_dicts.append(info_dict)
return super(YoutubeDL, self).process_info(info_dict) return super(YoutubeDL, self).process_info(info_dict)
def _file_md5(fn): def _file_md5(fn):
with open(fn, 'rb') as f: with open(fn, 'rb') as f:
return hashlib.md5(f.read()).hexdigest() return hashlib.md5(f.read()).hexdigest()
@@ -61,10 +65,13 @@ defs = gettestcases()
class TestDownload(unittest.TestCase): class TestDownload(unittest.TestCase):
maxDiff = None maxDiff = None
def setUp(self): def setUp(self):
self.defs = defs self.defs = defs
### Dynamically generate tests # Dynamically generate tests
def generator(test_case): def generator(test_case):
def test_template(self): def test_template(self):
@@ -90,7 +97,7 @@ def generator(test_case):
return return
for other_ie in other_ies: for other_ie in other_ies:
if not other_ie.working(): if not other_ie.working():
print_skipping(u'test depends on %sIE, marked as not WORKING' % other_ie.ie_key()) print_skipping('test depends on %sIE, marked as not WORKING' % other_ie.ie_key())
return return
params = get_params(test_case.get('params', {})) params = get_params(test_case.get('params', {}))
@@ -101,6 +108,7 @@ def generator(test_case):
ydl = YoutubeDL(params, auto_init=False) ydl = YoutubeDL(params, auto_init=False)
ydl.add_default_info_extractors() ydl.add_default_info_extractors()
finished_hook_called = set() finished_hook_called = set()
def _hook(status): def _hook(status):
if status['status'] == 'finished': if status['status'] == 'finished':
finished_hook_called.add(status['filename']) finished_hook_called.add(status['filename'])
@@ -111,6 +119,7 @@ def generator(test_case):
return tc.get('file') or ydl.prepare_filename(tc.get('info_dict', {})) return tc.get('file') or ydl.prepare_filename(tc.get('info_dict', {}))
res_dict = None res_dict = None
def try_rm_tcs_files(tcs=None): def try_rm_tcs_files(tcs=None):
if tcs is None: if tcs is None:
tcs = test_cases tcs = test_cases
@@ -134,7 +143,7 @@ def generator(test_case):
raise raise
if try_num == RETRIES: if try_num == RETRIES:
report_warning(u'Failed due to network errors, skipping...') report_warning('Failed due to network errors, skipping...')
return return
print('Retrying: {0} failed tries\n\n##########\n\n'.format(try_num)) print('Retrying: {0} failed tries\n\n##########\n\n'.format(try_num))
@@ -206,7 +215,7 @@ def generator(test_case):
return test_template return test_template
### And add them to TestDownload # And add them to TestDownload
for n, test_case in enumerate(defs): for n, test_case in enumerate(defs):
test_method = generator(test_case) test_method = generator(test_case)
tname = 'test_' + str(test_case['name']) tname = 'test_' + str(test_case['name'])

View File

@@ -23,6 +23,7 @@ from youtube_dl.extractor import (
class BaseTestSubtitles(unittest.TestCase): class BaseTestSubtitles(unittest.TestCase):
url = None url = None
IE = None IE = None
def setUp(self): def setUp(self):
self.DL = FakeYDL() self.DL = FakeYDL()
self.ie = self.IE(self.DL) self.ie = self.IE(self.DL)

View File

@@ -9,14 +9,13 @@ rootDir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
IGNORED_FILES = [ IGNORED_FILES = [
'setup.py', # http://bugs.python.org/issue13943 'setup.py', # http://bugs.python.org/issue13943
'conf.py',
'buildserver.py',
] ]
class TestUnicodeLiterals(unittest.TestCase): class TestUnicodeLiterals(unittest.TestCase):
def test_all_files(self): def test_all_files(self):
print('Skipping this test (not yet fully implemented)')
return
for dirpath, _, filenames in os.walk(rootDir): for dirpath, _, filenames in os.walk(rootDir):
for basename in filenames: for basename in filenames:
if not basename.endswith('.py'): if not basename.endswith('.py'):
@@ -30,10 +29,10 @@ class TestUnicodeLiterals(unittest.TestCase):
if "'" not in code and '"' not in code: if "'" not in code and '"' not in code:
continue continue
imps = 'from __future__ import unicode_literals'
self.assertTrue(
    imps in code,
    ' %s missing in %s' % (imps, fn))
self.assertRegexpMatches(
    code,
    r'(?:#.*\n*)?from __future__ import (?:[a-z_]+,\s*)*unicode_literals',
    'unicode_literals import missing in %s' % fn)
m = re.search(r'(?<=\s)u[\'"](?!\)|,|$)', code) m = re.search(r'(?<=\s)u[\'"](?!\)|,|$)', code)
if m is not None: if m is not None:
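The new regex is deliberately looser than the old substring test; a quick illustration (not part of the diff) of what it now accepts:

```
import re

pattern = r'(?:#.*\n*)?from __future__ import (?:[a-z_]+,\s*)*unicode_literals'
# Combined __future__ imports now pass the check ...
assert re.search(pattern, 'from __future__ import with_statement, unicode_literals\n')
# ... while files without the import still fail it.
assert not re.search(pattern, 'import os\n')
```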

View File

@@ -45,9 +45,9 @@ from youtube_dl.utils import (
escape_rfc3986, escape_rfc3986,
escape_url, escape_url,
js_to_json, js_to_json,
get_filesystem_encoding,
intlist_to_bytes, intlist_to_bytes,
args_to_str, args_to_str,
parse_filesize,
) )
@@ -171,7 +171,7 @@ class TestUtil(unittest.TestCase):
self.assertEqual(find('media:song/url').text, 'http://server.com/download.mp3') self.assertEqual(find('media:song/url').text, 'http://server.com/download.mp3')
def test_smuggle_url(self): def test_smuggle_url(self):
data = {u"ö": u"ö", u"abc": [3]} data = {"ö": "ö", "abc": [3]}
url = 'https://foo.bar/baz?x=y#a' url = 'https://foo.bar/baz?x=y#a'
smug_url = smuggle_url(url, data) smug_url = smuggle_url(url, data)
unsmug_url, unsmug_data = unsmuggle_url(smug_url) unsmug_url, unsmug_data = unsmuggle_url(smug_url)
@@ -368,5 +368,14 @@ class TestUtil(unittest.TestCase):
'foo ba/r -baz \'2 be\' \'\'' 'foo ba/r -baz \'2 be\' \'\''
) )
def test_parse_filesize(self):
self.assertEqual(parse_filesize(None), None)
self.assertEqual(parse_filesize(''), None)
self.assertEqual(parse_filesize('91 B'), 91)
self.assertEqual(parse_filesize('foobar'), None)
self.assertEqual(parse_filesize('2 MiB'), 2097152)
self.assertEqual(parse_filesize('5 GB'), 5000000000)
self.assertEqual(parse_filesize('1.2Tb'), 1200000000000)
if __name__ == '__main__': if __name__ == '__main__':
unittest.main() unittest.main()
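A minimal sketch of a `parse_filesize`-style helper consistent with these expectations; the real implementation in `youtube_dl/utils.py` supports many more units and spellings, and the unit table below is an assumption.

```
import re

_UNITS = {
    'B': 1,
    'KiB': 1024, 'KB': 1000, 'Kb': 1000,
    'MiB': 1024 ** 2, 'MB': 1000 ** 2, 'Mb': 1000 ** 2,
    'GiB': 1024 ** 3, 'GB': 1000 ** 3, 'Gb': 1000 ** 3,
    'TiB': 1024 ** 4, 'TB': 1000 ** 4, 'Tb': 1000 ** 4,
}

def parse_filesize(s):
    if not s:
        return None
    m = re.match(r'\s*(\d+(?:\.\d+)?)\s*([A-Za-z]+)\s*$', s)
    if m is None or m.group(2) not in _UNITS:
        return None
    return int(float(m.group(1)) * _UNITS[m.group(2)])

assert parse_filesize('2 MiB') == 2097152
assert parse_filesize('1.2Tb') == 1200000000000
assert parse_filesize('foobar') is None
```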

View File

@@ -1,5 +1,6 @@
#!/usr/bin/env python #!/usr/bin/env python
# coding: utf-8 # coding: utf-8
from __future__ import unicode_literals
# Allow direct execution # Allow direct execution
import os import os
@@ -31,17 +32,16 @@ params = get_params({
}) })
TEST_ID = 'gr51aVj-mLg' TEST_ID = 'gr51aVj-mLg'
ANNOTATIONS_FILE = TEST_ID + '.flv.annotations.xml' ANNOTATIONS_FILE = TEST_ID + '.flv.annotations.xml'
EXPECTED_ANNOTATIONS = ['Speech bubble', 'Note', 'Title', 'Spotlight', 'Label'] EXPECTED_ANNOTATIONS = ['Speech bubble', 'Note', 'Title', 'Spotlight', 'Label']
class TestAnnotations(unittest.TestCase): class TestAnnotations(unittest.TestCase):
def setUp(self): def setUp(self):
# Clear old files # Clear old files
self.tearDown() self.tearDown()
def test_info_json(self): def test_info_json(self):
expected = list(EXPECTED_ANNOTATIONS) # Two annotations could have the same text. expected = list(EXPECTED_ANNOTATIONS) # Two annotations could have the same text.
ie = youtube_dl.extractor.YoutubeIE() ie = youtube_dl.extractor.YoutubeIE()
@@ -71,7 +71,6 @@ class TestAnnotations(unittest.TestCase):
# We should have seen (and removed) all the expected annotation texts. # We should have seen (and removed) all the expected annotation texts.
self.assertEqual(len(expected), 0, 'Not all expected annotations were found.') self.assertEqual(len(expected), 0, 'Not all expected annotations were found.')
def tearDown(self): def tearDown(self):
try_rm(ANNOTATIONS_FILE) try_rm(ANNOTATIONS_FILE)

View File

@@ -1,5 +1,6 @@
#!/usr/bin/env python #!/usr/bin/env python
# coding: utf-8 # coding: utf-8
from __future__ import unicode_literals
# Allow direct execution # Allow direct execution
import os import os
@@ -32,7 +33,7 @@ params = get_params({
TEST_ID = 'BaW_jenozKc' TEST_ID = 'BaW_jenozKc'
INFO_JSON_FILE = TEST_ID + '.info.json' INFO_JSON_FILE = TEST_ID + '.info.json'
DESCRIPTION_FILE = TEST_ID + '.mp4.description' DESCRIPTION_FILE = TEST_ID + '.mp4.description'
EXPECTED_DESCRIPTION = u'''test chars: "'/\ä↭𝕐 EXPECTED_DESCRIPTION = '''test chars: "'/\ä↭𝕐
test URL: https://github.com/rg3/youtube-dl/issues/1892 test URL: https://github.com/rg3/youtube-dl/issues/1892
This is a test video for youtube-dl. This is a test video for youtube-dl.
@@ -53,11 +54,11 @@ class TestInfoJSON(unittest.TestCase):
self.assertTrue(os.path.exists(INFO_JSON_FILE)) self.assertTrue(os.path.exists(INFO_JSON_FILE))
with io.open(INFO_JSON_FILE, 'r', encoding='utf-8') as jsonf: with io.open(INFO_JSON_FILE, 'r', encoding='utf-8') as jsonf:
jd = json.load(jsonf) jd = json.load(jsonf)
self.assertEqual(jd['upload_date'], u'20121002') self.assertEqual(jd['upload_date'], '20121002')
self.assertEqual(jd['description'], EXPECTED_DESCRIPTION) self.assertEqual(jd['description'], EXPECTED_DESCRIPTION)
self.assertEqual(jd['id'], TEST_ID) self.assertEqual(jd['id'], TEST_ID)
self.assertEqual(jd['extractor'], 'youtube') self.assertEqual(jd['extractor'], 'youtube')
self.assertEqual(jd['title'], u'''youtube-dl test video "'/\ä↭𝕐''') self.assertEqual(jd['title'], '''youtube-dl test video "'/\ä↭𝕐''')
self.assertEqual(jd['uploader'], 'Philipp Hagemeister') self.assertEqual(jd['uploader'], 'Philipp Hagemeister')
self.assertTrue(os.path.exists(DESCRIPTION_FILE)) self.assertTrue(os.path.exists(DESCRIPTION_FILE))

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python #!/usr/bin/env python
from __future__ import unicode_literals
# Allow direct execution # Allow direct execution
import os import os
@@ -12,10 +13,6 @@ from test.helper import FakeYDL
from youtube_dl.extractor import ( from youtube_dl.extractor import (
YoutubePlaylistIE, YoutubePlaylistIE,
YoutubeIE, YoutubeIE,
YoutubeChannelIE,
YoutubeShowIE,
YoutubeTopListIE,
YoutubeSearchURLIE,
) )

View File

@@ -29,7 +29,6 @@ from .compat import (
compat_str, compat_str,
compat_urllib_error, compat_urllib_error,
compat_urllib_request, compat_urllib_request,
shlex_quote,
) )
from .utils import ( from .utils import (
escape_url, escape_url,
@@ -700,14 +699,17 @@ class YoutubeDL(object):
self.report_warning( self.report_warning(
'Extractor %s returned a compat_list result. ' 'Extractor %s returned a compat_list result. '
'It needs to be updated.' % ie_result.get('extractor')) 'It needs to be updated.' % ie_result.get('extractor'))
def _fixup(r): def _fixup(r):
self.add_extra_info(r,
    {
        'extractor': ie_result['extractor'],
        'webpage_url': ie_result['webpage_url'],
        'webpage_url_basename': url_basename(ie_result['webpage_url']),
        'extractor_key': ie_result['extractor_key'],
    })
self.add_extra_info(
    r,
    {
        'extractor': ie_result['extractor'],
        'webpage_url': ie_result['webpage_url'],
        'webpage_url_basename': url_basename(ie_result['webpage_url']),
        'extractor_key': ie_result['extractor_key'],
    }
)
return r return r
ie_result['entries'] = [ ie_result['entries'] = [
self.process_ie_result(_fixup(r), download, extra_info) self.process_ie_result(_fixup(r), download, extra_info)
@@ -1428,4 +1430,3 @@ class YoutubeDL(object):
if encoding is None: if encoding is None:
encoding = preferredencoding() encoding = preferredencoding()
return encoding return encoding

View File

@@ -128,7 +128,6 @@ def _real_main(argv=None):
compat_print(desc) compat_print(desc)
sys.exit(0) sys.exit(0)
# Conflicting, missing and erroneous options # Conflicting, missing and erroneous options
if opts.usenetrc and (opts.username is not None or opts.password is not None): if opts.usenetrc and (opts.username is not None or opts.password is not None):
parser.error('using .netrc conflicts with giving username/password') parser.error('using .netrc conflicts with giving username/password')
@@ -190,7 +189,7 @@ def _real_main(argv=None):
# --all-sub automatically sets --write-sub if --write-auto-sub is not given # --all-sub automatically sets --write-sub if --write-auto-sub is not given
# this was the old behaviour if only --all-sub was given. # this was the old behaviour if only --all-sub was given.
if opts.allsubtitles and (opts.writeautomaticsub == False): if opts.allsubtitles and not opts.writeautomaticsub:
opts.writesubtitles = True opts.writesubtitles = True
if sys.version_info < (3,): if sys.version_info < (3,):
@@ -317,7 +316,6 @@ def _real_main(argv=None):
ydl.add_post_processor(FFmpegAudioFixPP()) ydl.add_post_processor(FFmpegAudioFixPP())
ydl.add_post_processor(AtomicParsleyPP()) ydl.add_post_processor(AtomicParsleyPP())
# Please keep ExecAfterDownload towards the bottom as it allows the user to modify the final file in any way. # Please keep ExecAfterDownload towards the bottom as it allows the user to modify the final file in any way.
# So if the user is able to remove the file before your postprocessor runs it might cause a few problems. # So if the user is able to remove the file before your postprocessor runs it might cause a few problems.
if opts.exec_cmd: if opts.exec_cmd:

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python #!/usr/bin/env python
from __future__ import unicode_literals
# Execute with # Execute with
# $ python youtube_dl/__main__.py (2.6+) # $ python youtube_dl/__main__.py (2.6+)

View File

@@ -1,3 +1,5 @@
from __future__ import unicode_literals
__all__ = ['aes_encrypt', 'key_expansion', 'aes_ctr_decrypt', 'aes_cbc_decrypt', 'aes_decrypt_text'] __all__ = ['aes_encrypt', 'key_expansion', 'aes_ctr_decrypt', 'aes_cbc_decrypt', 'aes_decrypt_text']
import base64 import base64
@@ -7,6 +9,7 @@ from .utils import bytes_to_intlist, intlist_to_bytes
BLOCK_SIZE_BYTES = 16 BLOCK_SIZE_BYTES = 16
def aes_ctr_decrypt(data, key, counter): def aes_ctr_decrypt(data, key, counter):
""" """
Decrypt with aes in counter mode Decrypt with aes in counter mode
@@ -32,6 +35,7 @@ def aes_ctr_decrypt(data, key, counter):
return decrypted_data return decrypted_data
def aes_cbc_decrypt(data, key, iv): def aes_cbc_decrypt(data, key, iv):
""" """
Decrypt with aes in CBC mode Decrypt with aes in CBC mode
@@ -57,6 +61,7 @@ def aes_cbc_decrypt(data, key, iv):
return decrypted_data return decrypted_data
def key_expansion(data): def key_expansion(data):
""" """
Generate key schedule Generate key schedule
@@ -91,6 +96,7 @@ def key_expansion(data):
return data return data
def aes_encrypt(data, expanded_key): def aes_encrypt(data, expanded_key):
""" """
Encrypt one block with aes Encrypt one block with aes
@@ -111,6 +117,7 @@ def aes_encrypt(data, expanded_key):
return data return data
def aes_decrypt(data, expanded_key): def aes_decrypt(data, expanded_key):
""" """
Decrypt one block with aes Decrypt one block with aes
@@ -131,6 +138,7 @@ def aes_decrypt(data, expanded_key):
return data return data
def aes_decrypt_text(data, password, key_size_bytes): def aes_decrypt_text(data, password, key_size_bytes):
""" """
Decrypt text Decrypt text
@@ -157,6 +165,7 @@ def aes_decrypt_text(data, password, key_size_bytes):
class Counter: class Counter:
__value = nonce + [0] * (BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES) __value = nonce + [0] * (BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES)
def next_value(self): def next_value(self):
temp = self.__value temp = self.__value
self.__value = inc(self.__value) self.__value = inc(self.__value)
@@ -241,15 +250,19 @@ RIJNDAEL_LOG_TABLE = (0x00, 0x00, 0x19, 0x01, 0x32, 0x02, 0x1a, 0xc6, 0x4b, 0xc7
0x44, 0x11, 0x92, 0xd9, 0x23, 0x20, 0x2e, 0x89, 0xb4, 0x7c, 0xb8, 0x26, 0x77, 0x99, 0xe3, 0xa5, 0x44, 0x11, 0x92, 0xd9, 0x23, 0x20, 0x2e, 0x89, 0xb4, 0x7c, 0xb8, 0x26, 0x77, 0x99, 0xe3, 0xa5,
0x67, 0x4a, 0xed, 0xde, 0xc5, 0x31, 0xfe, 0x18, 0x0d, 0x63, 0x8c, 0x80, 0xc0, 0xf7, 0x70, 0x07) 0x67, 0x4a, 0xed, 0xde, 0xc5, 0x31, 0xfe, 0x18, 0x0d, 0x63, 0x8c, 0x80, 0xc0, 0xf7, 0x70, 0x07)
def sub_bytes(data): def sub_bytes(data):
return [SBOX[x] for x in data] return [SBOX[x] for x in data]
def sub_bytes_inv(data): def sub_bytes_inv(data):
return [SBOX_INV[x] for x in data] return [SBOX_INV[x] for x in data]
def rotate(data): def rotate(data):
return data[1:] + [data[0]] return data[1:] + [data[0]]
def key_schedule_core(data, rcon_iteration): def key_schedule_core(data, rcon_iteration):
data = rotate(data) data = rotate(data)
data = sub_bytes(data) data = sub_bytes(data)
@@ -257,14 +270,17 @@ def key_schedule_core(data, rcon_iteration):
return data return data
def xor(data1, data2): def xor(data1, data2):
return [x ^ y for x, y in zip(data1, data2)] return [x ^ y for x, y in zip(data1, data2)]
def rijndael_mul(a, b): def rijndael_mul(a, b):
if(a == 0 or b == 0): if(a == 0 or b == 0):
return 0 return 0
return RIJNDAEL_EXP_TABLE[(RIJNDAEL_LOG_TABLE[a] + RIJNDAEL_LOG_TABLE[b]) % 0xFF] return RIJNDAEL_EXP_TABLE[(RIJNDAEL_LOG_TABLE[a] + RIJNDAEL_LOG_TABLE[b]) % 0xFF]
def mix_column(data, matrix): def mix_column(data, matrix):
data_mixed = [] data_mixed = []
for row in range(4): for row in range(4):
@@ -275,6 +291,7 @@ def mix_column(data, matrix):
data_mixed.append(mixed) data_mixed.append(mixed)
return data_mixed return data_mixed
def mix_columns(data, matrix=MIX_COLUMN_MATRIX): def mix_columns(data, matrix=MIX_COLUMN_MATRIX):
data_mixed = [] data_mixed = []
for i in range(4): for i in range(4):
@@ -282,9 +299,11 @@ def mix_columns(data, matrix=MIX_COLUMN_MATRIX):
data_mixed += mix_column(column, matrix) data_mixed += mix_column(column, matrix)
return data_mixed return data_mixed
def mix_columns_inv(data): def mix_columns_inv(data):
return mix_columns(data, MIX_COLUMN_MATRIX_INV) return mix_columns(data, MIX_COLUMN_MATRIX_INV)
def shift_rows(data): def shift_rows(data):
data_shifted = [] data_shifted = []
for column in range(4): for column in range(4):
@@ -292,6 +311,7 @@ def shift_rows(data):
data_shifted.append(data[((column + row) & 0b11) * 4 + row]) data_shifted.append(data[((column + row) & 0b11) * 4 + row])
return data_shifted return data_shifted
def shift_rows_inv(data): def shift_rows_inv(data):
data_shifted = [] data_shifted = []
for column in range(4): for column in range(4):
@@ -299,6 +319,7 @@ def shift_rows_inv(data):
data_shifted.append(data[((column - row) & 0b11) * 4 + row]) data_shifted.append(data[((column - row) & 0b11) * 4 + row])
return data_shifted return data_shifted
def inc(data): def inc(data):
data = data[:] # copy data = data[:] # copy
for i in range(len(data) - 1, -1, -1): for i in range(len(data) - 1, -1, -1):

View File

@@ -182,8 +182,10 @@ except ImportError: # Python < 3.3
def compat_ord(c): def compat_ord(c):
if type(c) is int: return c
else: return ord(c)
if type(c) is int:
    return c
else:
    return ord(c)
if sys.version_info >= (3, 0): if sys.version_info >= (3, 0):
@@ -268,7 +270,7 @@ if sys.version_info < (3, 0):
print(s.encode(preferredencoding(), 'xmlcharrefreplace')) print(s.encode(preferredencoding(), 'xmlcharrefreplace'))
else: else:
def compat_print(s): def compat_print(s):
assert type(s) == type(u'') assert isinstance(s, compat_str)
print(s) print(s)
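Why `compat_ord` exists, in one line: indexing or iterating a byte string yields an `int` on Python 3 but a 1-character `str` on Python 2, and the helper normalizes both to an `int`.

```
def compat_ord(c):
    if type(c) is int:
        return c
    else:
        return ord(c)

assert [compat_ord(c) for c in b'ab'] == [97, 98]  # same result on 2.x and 3.x
```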

View File

@@ -30,3 +30,8 @@ def get_suitable_downloader(info_dict):
return F4mFD return F4mFD
else: else:
return HttpFD return HttpFD
__all__ = [
'get_suitable_downloader',
'FileDownloader',
]

View File

@@ -225,13 +225,15 @@ class F4mFD(FileDownloader):
self.to_screen('[download] Downloading f4m manifest') self.to_screen('[download] Downloading f4m manifest')
manifest = self.ydl.urlopen(man_url).read() manifest = self.ydl.urlopen(man_url).read()
self.report_destination(filename) self.report_destination(filename)
http_dl = HttpQuietDownloader(self.ydl,
    {
        'continuedl': True,
        'quiet': True,
        'noprogress': True,
        'test': self.params.get('test', False),
    })
http_dl = HttpQuietDownloader(
    self.ydl,
    {
        'continuedl': True,
        'quiet': True,
        'noprogress': True,
        'test': self.params.get('test', False),
    }
)
doc = etree.fromstring(manifest) doc = etree.fromstring(manifest)
formats = [(int(f.attrib.get('bitrate', -1)), f) for f in doc.findall(_add_ns('media'))] formats = [(int(f.attrib.get('bitrate', -1)), f) for f in doc.findall(_add_ns('media'))]

View File

@@ -28,14 +28,14 @@ class HlsFD(FileDownloader):
if check_executable(program, ['-version']): if check_executable(program, ['-version']):
break break
else: else:
self.report_error(u'm3u8 download detected but ffmpeg or avconv could not be found. Please install one.') self.report_error('m3u8 download detected but ffmpeg or avconv could not be found. Please install one.')
return False return False
cmd = [program] + args cmd = [program] + args
retval = subprocess.call(cmd) retval = subprocess.call(cmd)
if retval == 0: if retval == 0:
fsize = os.path.getsize(encodeFilename(tmpfilename)) fsize = os.path.getsize(encodeFilename(tmpfilename))
self.to_screen(u'\r[%s] %s bytes' % (cmd[0], fsize)) self.to_screen('\r[%s] %s bytes' % (cmd[0], fsize))
self.try_rename(tmpfilename, filename) self.try_rename(tmpfilename, filename)
self._hook_progress({ self._hook_progress({
'downloaded_bytes': fsize, 'downloaded_bytes': fsize,
@@ -45,8 +45,8 @@ class HlsFD(FileDownloader):
}) })
return True return True
else: else:
self.to_stderr(u"\n") self.to_stderr('\n')
self.report_error(u'%s exited with code %d' % (program, retval)) self.report_error('%s exited with code %d' % (program, retval))
return False return False
@@ -101,4 +101,3 @@ class NativeHlsFD(FileDownloader):
}) })
self.try_rename(tmpfilename, filename) self.try_rename(tmpfilename, filename)
return True return True

View File

@@ -1,3 +1,5 @@
from __future__ import unicode_literals
import os import os
import time import time
@@ -106,7 +108,7 @@ class HttpFD(FileDownloader):
self.report_retry(count, retries) self.report_retry(count, retries)
if count > retries: if count > retries:
self.report_error(u'giving up after %s retries' % retries) self.report_error('giving up after %s retries' % retries)
return False return False
data_len = data.info().get('Content-length', None) data_len = data.info().get('Content-length', None)
@@ -124,10 +126,10 @@ class HttpFD(FileDownloader):
min_data_len = self.params.get("min_filesize", None) min_data_len = self.params.get("min_filesize", None)
max_data_len = self.params.get("max_filesize", None) max_data_len = self.params.get("max_filesize", None)
if min_data_len is not None and data_len < min_data_len: if min_data_len is not None and data_len < min_data_len:
self.to_screen(u'\r[download] File is smaller than min-filesize (%s bytes < %s bytes). Aborting.' % (data_len, min_data_len)) self.to_screen('\r[download] File is smaller than min-filesize (%s bytes < %s bytes). Aborting.' % (data_len, min_data_len))
return False return False
if max_data_len is not None and data_len > max_data_len: if max_data_len is not None and data_len > max_data_len:
self.to_screen(u'\r[download] File is larger than max-filesize (%s bytes > %s bytes). Aborting.' % (data_len, max_data_len)) self.to_screen('\r[download] File is larger than max-filesize (%s bytes > %s bytes). Aborting.' % (data_len, max_data_len))
return False return False
data_len_str = format_bytes(data_len) data_len_str = format_bytes(data_len)
@@ -151,13 +153,13 @@ class HttpFD(FileDownloader):
filename = self.undo_temp_name(tmpfilename) filename = self.undo_temp_name(tmpfilename)
self.report_destination(filename) self.report_destination(filename)
except (OSError, IOError) as err: except (OSError, IOError) as err:
self.report_error(u'unable to open for writing: %s' % str(err)) self.report_error('unable to open for writing: %s' % str(err))
return False return False
try: try:
stream.write(data_block) stream.write(data_block)
except (IOError, OSError) as err: except (IOError, OSError) as err:
self.to_stderr(u"\n") self.to_stderr('\n')
self.report_error(u'unable to write data: %s' % str(err)) self.report_error('unable to write data: %s' % str(err))
return False return False
if not self.params.get('noresizebuffer', False): if not self.params.get('noresizebuffer', False):
block_size = self.best_block_size(after - before, len(data_block)) block_size = self.best_block_size(after - before, len(data_block))
@@ -188,10 +190,10 @@ class HttpFD(FileDownloader):
self.slow_down(start, byte_counter - resume_len) self.slow_down(start, byte_counter - resume_len)
if stream is None: if stream is None:
self.to_stderr(u"\n") self.to_stderr('\n')
self.report_error(u'Did not get any data blocks') self.report_error('Did not get any data blocks')
return False return False
if tmpfilename != u'-': if tmpfilename != '-':
stream.close() stream.close()
self.report_finish(data_len_str, (time.time() - start)) self.report_finish(data_len_str, (time.time() - start))
if data_len is not None and byte_counter != data_len: if data_len is not None and byte_counter != data_len:

View File

@@ -1,7 +1,10 @@
from __future__ import unicode_literals
import os import os
import subprocess import subprocess
from .common import FileDownloader from .common import FileDownloader
from ..compat import compat_subprocess_get_DEVNULL
from ..utils import ( from ..utils import (
encodeFilename, encodeFilename,
) )
@@ -13,19 +16,23 @@ class MplayerFD(FileDownloader):
self.report_destination(filename) self.report_destination(filename)
tmpfilename = self.temp_name(filename) tmpfilename = self.temp_name(filename)
args = ['mplayer', '-really-quiet', '-vo', 'null', '-vc', 'dummy', '-dumpstream', '-dumpfile', tmpfilename, url]
args = [
    'mplayer', '-really-quiet', '-vo', 'null', '-vc', 'dummy',
    '-dumpstream', '-dumpfile', tmpfilename, url]
# Check for mplayer first # Check for mplayer first
try: try:
subprocess.call(['mplayer', '-h'], stdout=(open(os.path.devnull, 'w')), stderr=subprocess.STDOUT)
subprocess.call(
    ['mplayer', '-h'],
    stdout=compat_subprocess_get_DEVNULL(), stderr=subprocess.STDOUT)
except (OSError, IOError): except (OSError, IOError):
self.report_error(u'MMS or RTSP download detected but "%s" could not be run' % args[0]) self.report_error('MMS or RTSP download detected but "%s" could not be run' % args[0])
return False return False
# Download using mplayer. # Download using mplayer.
retval = subprocess.call(args) retval = subprocess.call(args)
if retval == 0: if retval == 0:
fsize = os.path.getsize(encodeFilename(tmpfilename)) fsize = os.path.getsize(encodeFilename(tmpfilename))
self.to_screen(u'\r[%s] %s bytes' % (args[0], fsize)) self.to_screen('\r[%s] %s bytes' % (args[0], fsize))
self.try_rename(tmpfilename, filename) self.try_rename(tmpfilename, filename)
self._hook_progress({ self._hook_progress({
'downloaded_bytes': fsize, 'downloaded_bytes': fsize,
@@ -35,6 +42,6 @@ class MplayerFD(FileDownloader):
}) })
return True return True
else: else:
self.to_stderr(u"\n") self.to_stderr('\n')
self.report_error(u'mplayer exited with code %d' % retval) self.report_error('mplayer exited with code %d' % retval)
return False return False

View File

@@ -1,3 +1,5 @@
from __future__ import unicode_literals
from .abc import ABCIE from .abc import ABCIE
from .academicearth import AcademicEarthCourseIE from .academicearth import AcademicEarthCourseIE
from .addanime import AddAnimeIE from .addanime import AddAnimeIE
@@ -32,6 +34,7 @@ from .bilibili import BiliBiliIE
from .blinkx import BlinkxIE from .blinkx import BlinkxIE
from .bliptv import BlipTVIE, BlipTVUserIE from .bliptv import BlipTVIE, BlipTVUserIE
from .bloomberg import BloombergIE from .bloomberg import BloombergIE
from .bpb import BpbIE
from .br import BRIE from .br import BRIE
from .breakcom import BreakIE from .breakcom import BreakIE
from .brightcove import BrightcoveIE from .brightcove import BrightcoveIE
@@ -372,6 +375,7 @@ from .syfy import SyfyIE
from .sztvhu import SztvHuIE from .sztvhu import SztvHuIE
from .tagesschau import TagesschauIE from .tagesschau import TagesschauIE
from .tapely import TapelyIE from .tapely import TapelyIE
from .tass import TassIE
from .teachertube import ( from .teachertube import (
TeacherTubeIE, TeacherTubeIE,
TeacherTubeUserIE, TeacherTubeUserIE,
@@ -392,6 +396,7 @@ from .thesixtyone import TheSixtyOneIE
from .thisav import ThisAVIE from .thisav import ThisAVIE
from .tinypic import TinyPicIE from .tinypic import TinyPicIE
from .tlc import TlcIE, TlcDeIE from .tlc import TlcIE, TlcDeIE
from .tmz import TMZIE
from .tnaflix import TNAFlixIE from .tnaflix import TNAFlixIE
from .thvideo import ( from .thvideo import (
THVideoIE, THVideoIE,
@@ -405,6 +410,7 @@ from .trutube import TruTubeIE
from .tube8 import Tube8IE from .tube8 import Tube8IE
from .tudou import TudouIE from .tudou import TudouIE
from .tumblr import TumblrIE from .tumblr import TumblrIE
from .tunein import TuneInIE
from .turbo import TurboIE from .turbo import TurboIE
from .tutv import TutvIE from .tutv import TutvIE
from .tvigle import TvigleIE from .tvigle import TvigleIE
@@ -481,6 +487,7 @@ from .wrzuta import WrzutaIE
from .xbef import XBefIE from .xbef import XBefIE
from .xboxclips import XboxClipsIE from .xboxclips import XboxClipsIE
from .xhamster import XHamsterIE from .xhamster import XHamsterIE
from .xminus import XMinusIE
from .xnxx import XNXXIE from .xnxx import XNXXIE
from .xvideos import XVideosIE from .xvideos import XVideosIE
from .xtube import XTubeUserIE, XTubeIE from .xtube import XTubeUserIE, XTubeIE
@@ -511,6 +518,10 @@ from .youtube import (
YoutubeWatchLaterIE, YoutubeWatchLaterIE,
) )
from .zdf import ZDFIE from .zdf import ZDFIE
from .zingmp3 import (
ZingMp3SongIE,
ZingMp3AlbumIE,
)
_ALL_CLASSES = [ _ALL_CLASSES = [
klass klass

View File

@@ -1,4 +1,5 @@
from __future__ import unicode_literals from __future__ import unicode_literals
import re import re
from .common import InfoExtractor from .common import InfoExtractor
@@ -18,15 +19,14 @@ class AcademicEarthCourseIE(InfoExtractor):
} }
def _real_extract(self, url): def _real_extract(self, url):
m = re.match(self._VALID_URL, url)
playlist_id = m.group('id')
playlist_id = self._match_id(url)
webpage = self._download_webpage(url, playlist_id) webpage = self._download_webpage(url, playlist_id)
title = self._html_search_regex( title = self._html_search_regex(
r'<h1 class="playlist-name"[^>]*?>(.*?)</h1>', webpage, u'title') r'<h1 class="playlist-name"[^>]*?>(.*?)</h1>', webpage, 'title')
description = self._html_search_regex( description = self._html_search_regex(
r'<p class="excerpt"[^>]*?>(.*?)</p>', r'<p class="excerpt"[^>]*?>(.*?)</p>',
webpage, u'description', fatal=False) webpage, 'description', fatal=False)
urls = re.findall( urls = re.findall(
r'<li class="lecture-preview">\s*?<a target="_blank" href="([^"]+)">', r'<li class="lecture-preview">\s*?<a target="_blank" href="([^"]+)">',
webpage) webpage)
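The `_match_id` helper that these "Modernize" commits switch to lives in `youtube_dl/extractor/common.py`; a hedged sketch of its behavior, with a made-up URL pattern:

```
import re

class SketchIE(object):
    # Any _VALID_URL with a named 'id' group works; this URL is an example.
    _VALID_URL = r'https?://example\.com/video/(?P<id>[0-9]+)'

    def _match_id(self, url):
        return re.match(self._VALID_URL, url).group('id')

assert SketchIE()._match_id('http://example.com/video/42') == '42'
```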

View File

@@ -15,8 +15,7 @@ from ..utils import (
class AddAnimeIE(InfoExtractor): class AddAnimeIE(InfoExtractor):
_VALID_URL = r'^http://(?:\w+\.)?add-anime\.net/watch_video\.php\?(?:.*?)v=(?P<id>[\w_]+)(?:.*)'
_VALID_URL = r'^http://(?:\w+\.)?add-anime\.net/watch_video\.php\?(?:.*?)v=(?P<video_id>[\w_]+)(?:.*)'
_TEST = { _TEST = {
'url': 'http://www.add-anime.net/watch_video.php?v=24MR3YO5SAS9', 'url': 'http://www.add-anime.net/watch_video.php?v=24MR3YO5SAS9',
'md5': '72954ea10bc979ab5e2eb288b21425a0', 'md5': '72954ea10bc979ab5e2eb288b21425a0',
@@ -29,9 +28,9 @@ class AddAnimeIE(InfoExtractor):
} }
def _real_extract(self, url): def _real_extract(self, url):
video_id = self._match_id(url)
try: try:
mobj = re.match(self._VALID_URL, url)
video_id = mobj.group('video_id')
webpage = self._download_webpage(url, video_id) webpage = self._download_webpage(url, video_id)
except ExtractorError as ee: except ExtractorError as ee:
if not isinstance(ee.cause, compat_HTTPError) or \ if not isinstance(ee.cause, compat_HTTPError) or \
@@ -49,7 +48,7 @@ class AddAnimeIE(InfoExtractor):
r'a\.value = ([0-9]+)[+]([0-9]+)[*]([0-9]+);', r'a\.value = ([0-9]+)[+]([0-9]+)[*]([0-9]+);',
redir_webpage) redir_webpage)
if av is None: if av is None:
raise ExtractorError(u'Cannot find redirect math task') raise ExtractorError('Cannot find redirect math task')
av_res = int(av.group(1)) + int(av.group(2)) * int(av.group(3)) av_res = int(av.group(1)) + int(av.group(2)) * int(av.group(3))
parsed_url = compat_urllib_parse_urlparse(url) parsed_url = compat_urllib_parse_urlparse(url)

View File

@@ -5,6 +5,7 @@ import re
from .common import InfoExtractor from .common import InfoExtractor
class AdultSwimIE(InfoExtractor): class AdultSwimIE(InfoExtractor):
_VALID_URL = r'https?://video\.adultswim\.com/(?P<path>.+?)(?:\.html)?(?:\?.*)?(?:#.*)?$' _VALID_URL = r'https?://video\.adultswim\.com/(?P<path>.+?)(?:\.html)?(?:\?.*)?(?:#.*)?$'
_TEST = { _TEST = {

View File

@@ -1,5 +1,4 @@
 # coding: utf-8
-
 from __future__ import unicode_literals

 import re
@@ -26,8 +25,7 @@ class AparatIE(InfoExtractor):
     }

     def _real_extract(self, url):
-        m = re.match(self._VALID_URL, url)
-        video_id = m.group('id')
+        video_id = self._match_id(url)

         # Note: There is an easier-to-parse configuration at
         # http://www.aparat.com/video/video/config/videohash/%video_id
@@ -40,15 +38,15 @@ class AparatIE(InfoExtractor):
         for i, video_url in enumerate(video_urls):
             req = HEADRequest(video_url)
             res = self._request_webpage(
-                req, video_id, note=u'Testing video URL %d' % i, errnote=False)
+                req, video_id, note='Testing video URL %d' % i, errnote=False)
             if res:
                 break
         else:
-            raise ExtractorError(u'No working video URLs found')
+            raise ExtractorError('No working video URLs found')

-        title = self._search_regex(r'\s+title:\s*"([^"]+)"', webpage, u'title')
+        title = self._search_regex(r'\s+title:\s*"([^"]+)"', webpage, 'title')
         thumbnail = self._search_regex(
-            r'\s+image:\s*"([^"]+)"', webpage, u'thumbnail', fatal=False)
+            r'\s+image:\s*"([^"]+)"', webpage, 'thumbnail', fatal=False)

         return {
             'id': video_id,


@@ -70,15 +70,17 @@ class AppleTrailersIE(InfoExtractor):
         uploader_id = mobj.group('company')

         playlist_url = compat_urlparse.urljoin(url, 'includes/playlists/itunes.inc')

         def fix_html(s):
             s = re.sub(r'(?s)<script[^<]*?>.*?</script>', '', s)
             s = re.sub(r'<img ([^<]*?)>', r'<img \1/>', s)
             # The ' in the onClick attributes are not escaped, it couldn't be parsed
             # like: http://trailers.apple.com/trailers/wb/gravity/
             def _clean_json(m):
                 return 'iTunes.playURL(%s);' % m.group(1).replace('\'', '&#39;')
             s = re.sub(self._JSON_RE, _clean_json, s)
-            s = '<html>' + s + u'</html>'
+            s = '<html>%s</html>' % s
             return s

         doc = self._download_xml(playlist_url, movie, transform_source=fix_html)
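`_download_xml` accepts a `transform_source` callable that can repair the raw document before it is parsed, which is what lets `fix_html` above feed not-quite-XML into the XML parser. A rough sketch of the pattern, with hypothetical names:

    import xml.etree.ElementTree as ET

    def download_xml_sketch(raw_bytes, transform_source=None):
        # Decode, optionally repair the markup, then parse it as XML.
        text = raw_bytes.decode('utf-8')
        if transform_source is not None:
            text = transform_source(text)
        return ET.fromstring(text)

    # A bare fragment is not a well-formed document until it is wrapped:
    doc = download_xml_sketch(
        b'<item>a</item><item>b</item>', lambda s: '<html>%s</html>' % s)
    print([el.text for el in doc])  # ['a', 'b']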


@@ -192,4 +192,3 @@ class ARDIE(InfoExtractor):
         'upload_date': upload_date,
         'thumbnail': thumbnail,
     }
-


@@ -18,7 +18,7 @@ class BambuserIE(InfoExtractor):
     _TEST = {
         'url': 'http://bambuser.com/v/4050584',
         # MD5 seems to be flaky, see https://travis-ci.org/rg3/youtube-dl/jobs/14051016#L388
-        #u'md5': 'fba8f7693e48fd4e8641b3fd5539a641',
+        # 'md5': 'fba8f7693e48fd4e8641b3fd5539a641',
         'info_dict': {
             'id': '4050584',
             'ext': 'flv',
@@ -73,7 +73,8 @@ class BambuserChannelIE(InfoExtractor):
         urls = []
         last_id = ''
         for i in itertools.count(1):
-            req_url = ('http://bambuser.com/xhr-api/index.php?username={user}'
+            req_url = (
+                'http://bambuser.com/xhr-api/index.php?username={user}'
                 '&sort=created&access_mode=0%2C1%2C2&limit={count}'
                 '&method=broadcast&format=json&vid_older_than={last}'
             ).format(user=user, count=self._STEP, last=last_id)
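The channel extractor pages through the XHR API with `itertools.count` and a `vid_older_than` cursor. A minimal sketch of that cursor-pagination pattern; the canned pages and field names below are placeholders, not the real API:

    import itertools

    # Canned pages standing in for the real XHR responses (placeholder data).
    _PAGES = {
        '': [{'vid': '103'}, {'vid': '102'}],
        '102': [{'vid': '101'}],
        '101': [],
    }

    def fetch_page(older_than):
        # Placeholder for the real request to the xhr-api endpoint.
        return _PAGES[older_than]

    def fetch_all():
        results = []
        last_id = ''
        for page_idx in itertools.count(1):  # endless page counter
            items = fetch_page(last_id)
            if not items:
                break  # an empty page means we are done
            results.extend(items)
            last_id = items[-1]['vid']  # oldest id becomes the next cursor
        return results

    print(fetch_all())  # items 103, 102, 101 in order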


@@ -0,0 +1,37 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+
+
+class BpbIE(InfoExtractor):
+    IE_DESC = 'Bundeszentrale für politische Bildung'
+    _VALID_URL = r'http://www\.bpb\.de/mediathek/(?P<id>[0-9]+)/'
+
+    _TEST = {
+        'url': 'http://www.bpb.de/mediathek/297/joachim-gauck-zu-1989-und-die-erinnerung-an-die-ddr',
+        'md5': '0792086e8e2bfbac9cdf27835d5f2093',
+        'info_dict': {
+            'id': '297',
+            'ext': 'mp4',
+            'title': 'Joachim Gauck zu 1989 und die Erinnerung an die DDR',
+            'description': 'Joachim Gauck, erster Beauftragter für die Stasi-Unterlagen, spricht auf dem Geschichtsforum über die friedliche Revolution 1989 und eine "gewisse Traurigkeit" im Umgang mit der DDR-Vergangenheit.'
+        }
+    }
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+
+        title = self._html_search_regex(
+            r'<h2 class="white">(.*?)</h2>', webpage, 'title')
+        video_url = self._html_search_regex(
+            r'(http://film\.bpb\.de/player/dokument_[0-9]+\.mp4)',
+            webpage, 'video URL')
+
+        return {
+            'id': video_id,
+            'url': video_url,
+            'title': title,
+            'description': self._og_search_description(webpage),
+        }
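One way to exercise a brand-new extractor like `BpbIE` is through the embedding API, using the URL from the `_TEST` block above; a minimal sketch, assuming a youtube-dl checkout is importable:

    import youtube_dl

    ydl = youtube_dl.YoutubeDL({'quiet': True})
    info = ydl.extract_info(
        'http://www.bpb.de/mediathek/297/joachim-gauck-zu-1989-und-die-erinnerung-an-die-ddr',
        download=False)
    print(info['id'], info['title'])  # expected: '297' and the test title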


@@ -45,4 +45,4 @@ class CBSIE(InfoExtractor):
         real_id = self._search_regex(
             r"video\.settings\.pid\s*=\s*'([^']+)';",
             webpage, 'real video ID')
-        return self.url_result(u'theplatform:%s' % real_id)
+        return self.url_result('theplatform:%s' % real_id)


@@ -5,6 +5,7 @@ import re
 from .common import InfoExtractor
 from ..utils import ExtractorError

+
 class Channel9IE(InfoExtractor):
     '''
     Common extractor for channel9.msdn.com.
@@ -187,7 +188,8 @@ class Channel9IE(InfoExtractor):
         view_count = self._extract_view_count(html)
         comment_count = self._extract_comment_count(html)

-        common = {'_type': 'video',
+        common = {
+            '_type': 'video',
             'id': content_path,
             'description': description,
             'thumbnail': thumbnail,


@@ -24,7 +24,7 @@ class ClipfishIE(InfoExtractor):
             'title': 'FIFA 14 - E3 2013 Trailer',
             'duration': 82,
         },
-        u'skip': 'Blocked in the US'
+        'skip': 'Blocked in the US'
     }

     def _real_extract(self, url):
@@ -34,7 +34,7 @@ class ClipfishIE(InfoExtractor):
         info_url = ('http://www.clipfish.de/devxml/videoinfo/%s?ts=%d' %
                     (video_id, int(time.time())))
         doc = self._download_xml(
-            info_url, video_id, note=u'Downloading info page')
+            info_url, video_id, note='Downloading info page')
         title = doc.find('title').text
         video_url = doc.find('filename').text
         if video_url is None:


@@ -39,6 +39,7 @@ class ClipsyndicateIE(InfoExtractor):
             transform_source=fix_xml_ampersands)
         track_doc = pdoc.find('trackList/track')

+
         def find_param(name):
             node = find_xpath_attr(track_doc, './/param', 'name', name)
             if node is not None:


@@ -25,8 +25,7 @@ class CNNIE(InfoExtractor):
             'duration': 135,
             'upload_date': '20130609',
         },
-    },
-    {
+    }, {
         "url": "http://edition.cnn.com/video/?/video/us/2013/08/21/sot-student-gives-epic-speech.georgia-institute-of-technology&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+rss%2Fcnn_topstories+%28RSS%3A+Top+Stories%29",
         "md5": "b5cc60c60a3477d185af8f19a2a26f4e",
         "info_dict": {


@@ -10,7 +10,8 @@ from ..utils import int_or_none
 class CollegeHumorIE(InfoExtractor):
     _VALID_URL = r'^(?:https?://)?(?:www\.)?collegehumor\.com/(video|embed|e)/(?P<videoid>[0-9]+)/?(?P<shorttitle>.*)$'

-    _TESTS = [{
+    _TESTS = [
+        {
             'url': 'http://www.collegehumor.com/video/6902724/comic-con-cosplay-catastrophe',
             'md5': 'dcc0f5c1c8be98dc33889a191f4c26bd',
             'info_dict': {
@@ -21,8 +22,7 @@ class CollegeHumorIE(InfoExtractor):
                 'age_limit': 13,
                 'duration': 187,
             },
-        },
-        {
+        }, {
             'url': 'http://www.collegehumor.com/video/3505939/font-conference',
             'md5': '72fa701d8ef38664a4dbb9e2ab721816',
             'info_dict': {
@@ -33,9 +33,8 @@ class CollegeHumorIE(InfoExtractor):
                 'age_limit': 10,
                 'duration': 179,
             },
-        },
+        }, {
             # embedded youtube video
-        {
             'url': 'http://www.collegehumor.com/embed/6950306',
             'info_dict': {
                 'id': 'Z-bao9fg6Yc',


@@ -296,9 +296,11 @@ class InfoExtractor(object):
         content = self._webpage_read_content(urlh, url_or_request, video_id, note, errnote, fatal)
         return (content, urlh)

-    def _webpage_read_content(self, urlh, url_or_request, video_id, note=None, errnote=None, fatal=True):
+    def _webpage_read_content(self, urlh, url_or_request, video_id, note=None, errnote=None, fatal=True, prefix=None):
         content_type = urlh.headers.get('Content-Type', '')
         webpage_bytes = urlh.read()
+        if prefix is not None:
+            webpage_bytes = prefix + webpage_bytes
         m = re.match(r'[a-zA-Z0-9_.-]+/[a-zA-Z0-9_.-]+\s*;\s*charset=(.+)', content_type)
         if m:
             encoding = m.group(1)
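The new `prefix` parameter exists because a caller may already have consumed the first bytes of the response body (as the generic extractor does further down when sniffing for direct video links); passing them back in lets the page be decoded as if it had never been read. A rough sketch of the idea with a file-like object and hypothetical names:

    import io

    def read_content(stream, prefix=None):
        # Re-attach any bytes the caller already consumed while sniffing.
        data = stream.read()
        if prefix is not None:
            data = prefix + data
        return data

    stream = io.BytesIO(b'<html>hello</html>')
    first_bytes = stream.read(6)          # peek at the start of the body
    assert first_bytes == b'<html>'
    page = read_content(stream, prefix=first_bytes)
    assert page == b'<html>hello</html>'  # nothing was lost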
@@ -434,6 +436,7 @@ class InfoExtractor(object):
         if video_id is not None:
             video_info['id'] = video_id
         return video_info

+
     @staticmethod
     def playlist_result(entries, playlist_id=None, playlist_title=None):
         """Returns a playlist"""


@@ -69,11 +69,9 @@ class CrunchyrollIE(SubtitlesInfoExtractor):
         login_request.add_header('Content-Type', 'application/x-www-form-urlencoded')
         self._download_webpage(login_request, None, False, 'Wrong login info')
-
     def _real_initialize(self):
         self._login()
-
     def _decrypt_subtitles(self, data, iv, id):
         data = bytes_to_intlist(data)
         iv = bytes_to_intlist(iv)
@@ -99,8 +97,10 @@ class CrunchyrollIE(SubtitlesInfoExtractor):
             return shaHash + [0] * 12

         key = obfuscate_key(id)
+
         class Counter:
             __value = iv
+
             def next_value(self):
                 temp = self.__value
                 self.__value = inc(self.__value)
@@ -248,7 +248,8 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
         subtitles = {}
         sub_format = self._downloader.params.get('subtitlesformat', 'srt')
         for sub_id, sub_name in re.findall(r'\?ssid=([0-9]+)" title="([^"]+)', webpage):
-            sub_page = self._download_webpage('http://www.crunchyroll.com/xml/?req=RpcApiSubtitle_GetXml&subtitle_script_id='+sub_id,\
+            sub_page = self._download_webpage(
+                'http://www.crunchyroll.com/xml/?req=RpcApiSubtitle_GetXml&subtitle_script_id=' + sub_id,
                 video_id, note='Downloading subtitles for ' + sub_name)
             id = self._search_regex(r'id=\'([0-9]+)', sub_page, 'subtitle_id', fatal=False)
             iv = self._search_regex(r'<iv>([^<]+)', sub_page, 'subtitle_iv', fatal=False)
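The inner `Counter` above is the counter half of CTR-mode decryption: each call hands out the current block value, then increments it for the next block. A standalone sketch of the pattern; the byte-list `inc` here is an assumption for illustration, not the library's exact helper:

    def inc(counter):
        # Increment a counter held as a list of byte values,
        # carrying overflow from the last position backwards (assumed order).
        counter = list(counter)
        for i in range(len(counter) - 1, -1, -1):
            counter[i] = (counter[i] + 1) % 256
            if counter[i] != 0:
                break
        return counter

    class Counter(object):
        def __init__(self, iv):
            self.__value = iv

        def next_value(self):
            temp = self.__value
            self.__value = inc(self.__value)
            return temp

    ctr = Counter([0, 0, 255])
    assert ctr.next_value() == [0, 0, 255]
    assert ctr.next_value() == [0, 1, 0]   # carry propagated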


@@ -18,6 +18,7 @@ from ..utils import (
     unescapeHTML,
 )

+
 class DailymotionBaseInfoExtractor(InfoExtractor):
     @staticmethod
     def _build_request(url):
@@ -27,6 +28,7 @@ class DailymotionBaseInfoExtractor(InfoExtractor):
         request.add_header('Cookie', 'ff=off')
         return request

+
 class DailymotionIE(DailymotionBaseInfoExtractor, SubtitlesInfoExtractor):
     """Information Extractor for Dailymotion"""


@@ -11,15 +11,15 @@ from ..utils import url_basename
 class DropboxIE(InfoExtractor):
     _VALID_URL = r'https?://(?:www\.)?dropbox[.]com/sh?/(?P<id>[a-zA-Z0-9]{15})/.*'
-    _TESTS = [{
+    _TESTS = [
+        {
             'url': 'https://www.dropbox.com/s/nelirfsxnmcfbfh/youtube-dl%20test%20video%20%27%C3%A4%22BaW_jenozKc.mp4?dl=0',
             'info_dict': {
                 'id': 'nelirfsxnmcfbfh',
                 'ext': 'mp4',
                 'title': 'youtube-dl test video \'ä"BaW_jenozKc'
             }
-        },
-        {
+        }, {
             'url': 'https://www.dropbox.com/sh/662glsejgzoj9sr/AAByil3FGH9KFNZ13e08eSa1a/Pregame%20Ceremony%20Program%20PA%2020140518.m4v',
             'only_matching': True,
         },


@@ -125,7 +125,7 @@ class EightTracksIE(InfoExtractor):
             info = {
                 'id': compat_str(track_data['id']),
                 'url': track_data['track_file_stream_url'],
-                'title': track_data['performer'] + u' - ' + track_data['name'],
+                'title': track_data['performer'] + ' - ' + track_data['name'],
                 'raw_title': track_data['name'],
                 'uploader_id': data['user']['login'],
                 'ext': 'm4a',


@@ -26,6 +26,19 @@ class FranceTVBaseInfoExtractor(InfoExtractor):
         if info.get('status') == 'NOK':
             raise ExtractorError(
                 '%s returned error: %s' % (self.IE_NAME, info['message']), expected=True)
+        allowed_countries = info['videos'][0].get('geoblocage')
+        if allowed_countries:
+            georestricted = True
+            geo_info = self._download_json(
+                'http://geo.francetv.fr/ws/edgescape.json', video_id,
+                'Downloading geo restriction info')
+            country = geo_info['reponse']['geo_info']['country_code']
+            if country not in allowed_countries:
+                raise ExtractorError(
+                    'The video is not available from your location',
+                    expected=True)
+        else:
+            georestricted = False

         formats = []
         for video in info['videos']:
@@ -36,6 +49,10 @@ class FranceTVBaseInfoExtractor(InfoExtractor):
                 continue
             format_id = video['format']
             if video_url.endswith('.f4m'):
+                if georestricted:
+                    # See https://github.com/rg3/youtube-dl/issues/3963
+                    # m3u8 urls work fine
+                    continue
                 video_url_parsed = compat_urllib_parse_urlparse(video_url)
                 f4m_url = self._download_webpage(
                     'http://hdfauth.francetv.fr/esi/urltokengen2.html?url=%s' % video_url_parsed.path,
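The geoblocking addition reduces to one check: look up the caller's country only when the video carries a `geoblocage` whitelist, and fail early if it does not match. A simplified, self-contained sketch (data shapes taken from the diff; the lookup is stubbed):

    def check_geo_restriction(video_info, lookup_country):
        """Raise if the first video carries a country whitelist that
        does not include the caller's country.
        Returns True when the video is geo-restricted but viewable."""
        allowed_countries = video_info['videos'][0].get('geoblocage')
        if not allowed_countries:
            return False  # no whitelist, no restriction
        country = lookup_country()  # e.g. one HTTP call to a geo service
        if country not in allowed_countries:
            raise ValueError('The video is not available from your location')
        return True

    # Usage with a stubbed geo lookup:
    info = {'videos': [{'geoblocage': ['FR', 'BE']}]}
    assert check_geo_restriction(info, lambda: 'FR') is True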


@@ -11,7 +11,7 @@ class GamekingsIE(InfoExtractor):
         'url': 'http://www.gamekings.tv/videos/phoenix-wright-ace-attorney-dual-destinies-review/',
         # MD5 is flaky, seems to change regularly
         # 'md5': '2f32b1f7b80fdc5cb616efb4f387f8a3',
-        u'info_dict': {
+        'info_dict': {
             'id': '20130811',
             'ext': 'mp4',
             'title': 'Phoenix Wright: Ace Attorney \u2013 Dual Destinies Review',


@@ -445,6 +445,30 @@ class GenericIE(InfoExtractor):
                 'title': 'Rosetta #CometLanding webcast HL 10',
             }
         },
+        # LazyYT
+        {
+            'url': 'http://discourse.ubuntu.com/t/unity-8-desktop-mode-windows-on-mir/1986',
+            'info_dict': {
+                'title': 'Unity 8 desktop-mode windows on Mir! - Ubuntu Discourse',
+            },
+            'playlist_mincount': 2,
+        },
+        # Direct link with incorrect MIME type
+        {
+            'url': 'http://ftp.nluug.nl/video/nluug/2014-11-20_nj14/zaal-2/5_Lennart_Poettering_-_Systemd.webm',
+            'md5': '4ccbebe5f36706d85221f204d7eb5913',
+            'info_dict': {
+                'url': 'http://ftp.nluug.nl/video/nluug/2014-11-20_nj14/zaal-2/5_Lennart_Poettering_-_Systemd.webm',
+                'id': '5_Lennart_Poettering_-_Systemd',
+                'ext': 'webm',
+                'title': '5_Lennart_Poettering_-_Systemd',
+                'upload_date': '20141120',
+            },
+            'expected_warnings': [
+                'URL could be a direct video link, returning it as such.'
+            ]
+        }
     ]

     def report_following_redirect(self, new_url):
@@ -537,9 +561,9 @@ class GenericIE(InfoExtractor):
             if default_search in ('error', 'fixup_error'):
                 raise ExtractorError(
-                    ('%r is not a valid URL. '
+                    '%r is not a valid URL. '
                     'Set --default-search "ytsearch" (or run  youtube-dl "ytsearch:%s" ) to search YouTube'
-                    ) % (url, url), expected=True)
+                    % (url, url), expected=True)
             else:
                 if ':' not in default_search:
                     default_search += ':'
@@ -598,10 +622,28 @@ class GenericIE(InfoExtractor):
         if not self._downloader.params.get('test', False) and not is_intentional:
             self._downloader.report_warning('Falling back on generic information extractor.')

-        if full_response:
-            webpage = self._webpage_read_content(full_response, url, video_id)
-        else:
-            webpage = self._download_webpage(url, video_id)
+        if not full_response:
+            full_response = self._request_webpage(url, video_id)
+
+        # Maybe it's a direct link to a video?
+        # Be careful not to download the whole thing!
+        first_bytes = full_response.read(512)
+        if not re.match(r'^\s*<', first_bytes.decode('utf-8', 'replace')):
+            self._downloader.report_warning(
+                'URL could be a direct video link, returning it as such.')
+            upload_date = unified_strdate(
+                head_response.headers.get('Last-Modified'))
+            return {
+                'id': video_id,
+                'title': os.path.splitext(url_basename(url))[0],
+                'direct': True,
+                'url': url,
+                'upload_date': upload_date,
+            }
+
+        webpage = self._webpage_read_content(
+            full_response, url, video_id, prefix=first_bytes)

         self.report_extraction(video_id)

         # Is it an RSS feed?
@@ -702,6 +744,12 @@ class GenericIE(InfoExtractor):
             return _playlist_from_matches(
                 matches, lambda m: unescapeHTML(m[1]))

+        # Look for lazyYT YouTube embed
+        matches = re.findall(
+            r'class="lazyYT" data-youtube-id="([^"]+)"', webpage)
+        if matches:
+            return _playlist_from_matches(matches, lambda m: unescapeHTML(m))
+
         # Look for embedded Dailymotion player
         matches = re.findall(
             r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:www\.)?dailymotion\.com/embed/video/.+?)\1', webpage)
@@ -1025,4 +1073,3 @@ class GenericIE(InfoExtractor):
             '_type': 'playlist',
             'entries': entries,
         }
-
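The direct-link branch hinges on one heuristic: an HTML page starts with `<` after optional whitespace, so if the first 512 bytes do not, the URL is treated as pointing straight at a media file. A self-contained sketch of that sniff:

    import re

    def looks_like_html(first_bytes):
        # HTML/XML documents start with '<' once leading whitespace is skipped;
        # raw media files essentially never do.
        return re.match(r'^\s*<', first_bytes.decode('utf-8', 'replace')) is not None

    assert looks_like_html(b'  <!DOCTYPE html><html>')
    assert not looks_like_html(b'\x1aE\xdf\xa3')  # start of a webm/matroska file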


@@ -9,14 +9,15 @@ from ..utils import (
     determine_ext,
     compat_urllib_parse,
     compat_urllib_request,
+    int_or_none,
 )


 class GorillaVidIE(InfoExtractor):
-    IE_DESC = 'GorillaVid.in, daclips.in and movpod.in'
+    IE_DESC = 'GorillaVid.in, daclips.in, movpod.in and fastvideo.in'

     _VALID_URL = r'''(?x)
         https?://(?P<host>(?:www\.)?
-            (?:daclips\.in|gorillavid\.in|movpod\.in))/
+            (?:daclips\.in|gorillavid\.in|movpod\.in|fastvideo\.in))/
         (?:embed-)?(?P<id>[0-9a-zA-Z]+)(?:-[0-9]+x[0-9]+\.html)?
     '''
@@ -49,6 +50,16 @@ class GorillaVidIE(InfoExtractor):
             'title': 'Micro Pig piglets ready on 16th July 2009-bG0PdrCdxUc',
             'thumbnail': 're:http://.*\.jpg',
         }
+    }, {
+        # video with countdown timeout
+        'url': 'http://fastvideo.in/1qmdn1lmsmbw',
+        'md5': '8b87ec3f6564a3108a0e8e66594842ba',
+        'info_dict': {
+            'id': '1qmdn1lmsmbw',
+            'ext': 'mp4',
+            'title': 'Man of Steel - Trailer',
+            'thumbnail': 're:http://.*\.jpg',
+        },
     }, {
         'url': 'http://movpod.in/0wguyyxi1yca',
         'only_matching': True,
@@ -71,6 +82,12 @@ class GorillaVidIE(InfoExtractor):
         ''', webpage))

         if fields['op'] == 'download1':
+            countdown = int_or_none(self._search_regex(
+                r'<span id="countdown_str">(?:[Ww]ait)?\s*<span id="cxc">(\d+)</span>\s*(?:seconds?)?</span>',
+                webpage, 'countdown', default=None))
+            if countdown:
+                self._sleep(countdown, video_id)
+
             post = compat_urllib_parse.urlencode(fields)

             req = compat_urllib_request.Request(url, post)
@@ -78,9 +95,13 @@ class GorillaVidIE(InfoExtractor):
         webpage = self._download_webpage(req, video_id, 'Downloading video page')

-        title = self._search_regex(r'style="z-index: [0-9]+;">([^<]+)</span>', webpage, 'title')
-        video_url = self._search_regex(r'file\s*:\s*\'(http[^\']+)\',', webpage, 'file url')
-        thumbnail = self._search_regex(r'image\s*:\s*\'(http[^\']+)\',', webpage, 'thumbnail', fatal=False)
+        title = self._search_regex(
+            r'style="z-index: [0-9]+;">([^<]+)</span>',
+            webpage, 'title', default=None) or self._og_search_title(webpage)
+        video_url = self._search_regex(
+            r'file\s*:\s*["\'](http[^"\']+)["\'],', webpage, 'file url')
+        thumbnail = self._search_regex(
+            r'image\s*:\s*["\'](http[^"\']+)["\'],', webpage, 'thumbnail', fatal=False)

         formats = [{
             'format_id': 'sd',
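The countdown handling above scrapes the wait time from the page and sleeps before submitting the form, so the site's server-side timer has expired by the time the POST arrives. A trimmed sketch of the same flow; the regex is copied from the diff, and `time.sleep` stands in for the library's `_sleep`, which also logs progress:

    import re
    import time

    COUNTDOWN_RE = (r'<span id="countdown_str">(?:[Ww]ait)?\s*'
                    r'<span id="cxc">(\d+)</span>\s*(?:seconds?)?</span>')

    def wait_for_countdown(webpage):
        mobj = re.search(COUNTDOWN_RE, webpage)
        if mobj:
            seconds = int(mobj.group(1))
            time.sleep(seconds)  # the real code reports this via self._sleep()

    wait_for_countdown('<span id="countdown_str">Wait <span id="cxc">0</span> seconds</span>')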


@@ -1,12 +1,13 @@
 from __future__ import unicode_literals

-import re
 import base64

 from .common import InfoExtractor
-from ..utils import (
+from ..compat import (
     compat_urllib_parse,
     compat_urllib_request,
+)
+from ..utils import (
     ExtractorError,
     HEADRequest,
 )
@@ -16,25 +17,24 @@ class HotNewHipHopIE(InfoExtractor):
     _VALID_URL = r'http://www\.hotnewhiphop\.com/.*\.(?P<id>.*)\.html'
     _TEST = {
         'url': 'http://www.hotnewhiphop.com/freddie-gibbs-lay-it-down-song.1435540.html',
-        'file': '1435540.mp3',
         'md5': '2c2cd2f76ef11a9b3b581e8b232f3d96',
         'info_dict': {
+            'id': '1435540',
+            'ext': 'mp3',
             'title': 'Freddie Gibbs - Lay It Down'
         }
     }

     def _real_extract(self, url):
-        m = re.match(self._VALID_URL, url)
-        video_id = m.group('id')
-        webpage_src = self._download_webpage(url, video_id)
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)

         video_url_base64 = self._search_regex(
-            r'data-path="(.*?)"', webpage_src, u'video URL', fatal=False)
+            r'data-path="(.*?)"', webpage, 'video URL', default=None)

         if video_url_base64 is None:
             video_url = self._search_regex(
-                r'"contentUrl" content="(.*?)"', webpage_src, u'video URL')
+                r'"contentUrl" content="(.*?)"', webpage, 'content URL')
             return self.url_result(video_url, ie='Youtube')

         reqdata = compat_urllib_parse.urlencode([
@@ -59,11 +59,11 @@ class HotNewHipHopIE(InfoExtractor):
         if video_url.endswith('.html'):
             raise ExtractorError('Redirect failed')

-        video_title = self._og_search_title(webpage_src).strip()
+        video_title = self._og_search_title(webpage).strip()

         return {
             'id': video_id,
             'url': video_url,
             'title': video_title,
-            'thumbnail': self._og_search_thumbnail(webpage_src),
+            'thumbnail': self._og_search_thumbnail(webpage),
         }


@@ -63,8 +63,10 @@ class IGNIE(InfoExtractor):
             'id': '078fdd005f6d3c02f63d795faa1b984f',
             'ext': 'mp4',
             'title': 'Rewind Theater - Wild Trailer Gamescom 2014',
-            'description': 'Giant skeletons, bloody hunts, and captivating'
-                ' natural beauty take our breath away.',
+            'description': (
+                'Giant skeletons, bloody hunts, and captivating'
+                ' natural beauty take our breath away.'
+            ),
         },
     },
 ]


@@ -58,9 +58,13 @@ class InternetVideoArchiveIE(InfoExtractor):
         item = info.find('channel/item')

         def _bp(p):
-            return xpath_with_ns(p,
-                {'media': 'http://search.yahoo.com/mrss/',
-                 'jwplayer': 'http://developer.longtailvideo.com/trac/wiki/FlashFormats'})
+            return xpath_with_ns(
+                p,
+                {
+                    'media': 'http://search.yahoo.com/mrss/',
+                    'jwplayer': 'http://developer.longtailvideo.com/trac/wiki/FlashFormats',
+                }
+            )
         formats = []
         for content in item.findall(_bp('media:group/media:content')):
             attr = content.attrib
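`xpath_with_ns` expands the `prefix:tag` steps of an XPath into the `{uri}tag` form that `xml.etree.ElementTree` requires. A minimal sketch of such a helper (the general idea, not necessarily the exact library implementation):

    def xpath_with_ns(path, ns_map):
        # Rewrite 'media:content' style steps as '{uri}content'.
        out = []
        for step in path.split('/'):
            if ':' in step:
                ns, tag = step.split(':', 1)
                out.append('{%s}%s' % (ns_map[ns], tag))
            else:
                out.append(step)
        return '/'.join(out)

    ns = {'media': 'http://search.yahoo.com/mrss/'}
    assert (xpath_with_ns('media:group/media:content', ns) ==
            '{http://search.yahoo.com/mrss/}group/{http://search.yahoo.com/mrss/}content')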


@@ -45,4 +45,3 @@ class JadoreCettePubIE(InfoExtractor):
         'title': title,
         'description': description,
     }
-


@@ -13,8 +13,10 @@ class KickStarterIE(InfoExtractor):
         'id': '1404461844',
         'ext': 'mp4',
         'title': 'Intersection: The Story of Josh Grant by Kyle Cowling',
-        'description': 'A unique motocross documentary that examines the '
-            'life and mind of one of sports most elite athletes: Josh Grant.',
+        'description': (
+            'A unique motocross documentary that examines the '
+            'life and mind of one of sports most elite athletes: Josh Grant.'
+        ),
     },
 }, {
     'note': 'Embedded video (not using the native kickstarter video service)',


@@ -30,4 +30,3 @@ class Ku6IE(InfoExtractor):
         'title': title,
         'url': downloadUrl
     }
-


@@ -75,4 +75,3 @@ class Laola1TvIE(InfoExtractor):
         'categories': categories,
         'ext': 'mp4',
     }
-


@@ -19,8 +19,7 @@ class LiveLeakIE(InfoExtractor):
             'uploader': 'ljfriel2',
             'title': 'Most unlucky car accident'
         }
-    },
-    {
+    }, {
         'url': 'http://www.liveleak.com/view?i=f93_1390833151',
         'md5': 'd3f1367d14cc3c15bf24fbfbe04b9abf',
         'info_dict': {
@@ -30,8 +29,7 @@ class LiveLeakIE(InfoExtractor):
             'uploader': 'ARD_Stinkt',
             'title': 'German Television does first Edward Snowden Interview (ENGLISH)',
         }
-    },
-    {
+    }, {
         'url': 'http://www.liveleak.com/view?i=4f7_1392687779',
         'md5': '42c6d97d54f1db107958760788c5f48f',
         'info_dict': {


@@ -7,6 +7,7 @@ from ..utils import (
     compat_urllib_parse,
 )

+
 class MalemotionIE(InfoExtractor):
     _VALID_URL = r'^(?:https?://)?malemotion\.com/video/(.+?)\.(?P<id>.+?)(#|$)'
     _TEST = {


@@ -49,7 +49,7 @@ class MooshareIE(InfoExtractor):
         page = self._download_webpage(url, video_id, 'Downloading page')

         if re.search(r'>Video Not Found or Deleted<', page) is not None:
-            raise ExtractorError(u'Video %s does not exist' % video_id, expected=True)
+            raise ExtractorError('Video %s does not exist' % video_id, expected=True)

         hash_key = self._html_search_regex(r'<input type="hidden" name="hash" value="([^"]+)">', page, 'hash')
         title = self._html_search_regex(r'(?m)<div class="blockTitle">\s*<h2>Watch ([^<]+)</h2>', page, 'title')


@@ -164,7 +164,7 @@ class MTVServicesInfoExtractor(InfoExtractor):
         if mgid is None or ':' not in mgid:
             mgid = self._search_regex(
                 [r'data-mgid="(.*?)"', r'swfobject.embedSWF\(".*?(mgid:.*?)"'],
-                webpage, u'mgid')
+                webpage, 'mgid')

         return self._get_videos_info(mgid)
@@ -245,7 +245,7 @@ class MTVIE(MTVServicesInfoExtractor):
         m_vevo = re.search(r'isVevoVideo = true;.*?vevoVideoId = "(.*?)";',
                            webpage, re.DOTALL)
         if m_vevo:
-            vevo_id = m_vevo.group(1);
+            vevo_id = m_vevo.group(1)
             self.to_screen('Vevo video detected: %s' % vevo_id)
             return self.url_result('vevo:%s' % vevo_id, ie='Vevo')


@@ -73,4 +73,3 @@ class MuenchenTVIE(InfoExtractor):
         'is_live': True,
         'thumbnail': thumbnail,
     }
-


@@ -1,47 +1,48 @@
-import re
-import json
+from __future__ import unicode_literals

 from .common import InfoExtractor
-from ..utils import (
+from ..compat import (
     compat_urllib_parse,
-    determine_ext,
 )


 class MuzuTVIE(InfoExtractor):
     _VALID_URL = r'https?://www\.muzu\.tv/(.+?)/(.+?)/(?P<id>\d+)'
-    IE_NAME = u'muzu.tv'
+    IE_NAME = 'muzu.tv'

     _TEST = {
-        u'url': u'http://www.muzu.tv/defected/marcashken-featuring-sos-cat-walk-original-mix-music-video/1981454/',
-        u'file': u'1981454.mp4',
-        u'md5': u'98f8b2c7bc50578d6a0364fff2bfb000',
-        u'info_dict': {
-            u'title': u'Cat Walk (Original Mix)',
-            u'description': u'md5:90e868994de201b2570e4e5854e19420',
-            u'uploader': u'MarcAshken featuring SOS',
+        'url': 'http://www.muzu.tv/defected/marcashken-featuring-sos-cat-walk-original-mix-music-video/1981454/',
+        'md5': '98f8b2c7bc50578d6a0364fff2bfb000',
+        'info_dict': {
+            'id': '1981454',
+            'ext': 'mp4',
+            'title': 'Cat Walk (Original Mix)',
+            'description': 'md5:90e868994de201b2570e4e5854e19420',
+            'uploader': 'MarcAshken featuring SOS',
         },
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)

-        info_data = compat_urllib_parse.urlencode({'format': 'json',
+        info_data = compat_urllib_parse.urlencode({
+            'format': 'json',
             'url': url,
         })
-        video_info_page = self._download_webpage('http://www.muzu.tv/api/oembed/?%s' % info_data,
-                                                 video_id, u'Downloading video info')
-        info = json.loads(video_info_page)
+        info = self._download_json(
+            'http://www.muzu.tv/api/oembed/?%s' % info_data,
+            video_id, 'Downloading video info')

-        player_info_page = self._download_webpage('http://player.muzu.tv/player/playerInit?ai=%s' % video_id,
-                                                  video_id, u'Downloading player info')
-        video_info = json.loads(player_info_page)['videos'][0]
+        player_info = self._download_json(
+            'http://player.muzu.tv/player/playerInit?ai=%s' % video_id,
+            video_id, 'Downloading player info')
+        video_info = player_info['videos'][0]
         for quality in ['1080', '720', '480', '360']:
             if video_info.get('v%s' % quality):
                 break

-        data = compat_urllib_parse.urlencode({'ai': video_id,
+        data = compat_urllib_parse.urlencode({
+            'ai': video_id,
             # Even if each time you watch a video the hash changes,
             # it seems to work for different videos, and it will work
             # even if you use any non empty string as a hash
@@ -49,15 +50,15 @@ class MuzuTVIE(InfoExtractor):
             'device': 'web',
             'qv': quality,
         })

-        video_url_page = self._download_webpage('http://player.muzu.tv/player/requestVideo?%s' % data,
-                                                video_id, u'Downloading video url')
-        video_url_info = json.loads(video_url_page)
+        video_url_info = self._download_json(
+            'http://player.muzu.tv/player/requestVideo?%s' % data,
+            video_id, 'Downloading video url')
         video_url = video_url_info['url']

-        return {'id': video_id,
+        return {
+            'id': video_id,
             'title': info['title'],
             'url': video_url,
-            'ext': determine_ext(video_url),
             'thumbnail': info['thumbnail_url'],
             'description': info['description'],
             'uploader': info['author_name'],
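`_download_json` collapses the old download-then-`json.loads` two-step into one call that also reports progress consistently. A sketch of what such a wrapper boils down to (simplified; the real helper additionally supports source transforms, fatality flags, and so on):

    import json

    def download_json_sketch(download_webpage, url, video_id, note):
        # download_webpage stands in for InfoExtractor._download_webpage.
        page = download_webpage(url, video_id, note)
        return json.loads(page)

    # Usage with a stubbed downloader:
    fake = lambda url, vid, note: '{"url": "http://example.com/v.mp4"}'
    info = download_json_sketch(fake, 'http://player.example/requestVideo',
                                '1981454', 'Downloading video url')
    assert info['url'] == 'http://example.com/v.mp4'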


@@ -4,7 +4,7 @@ import re
 import json

 from .common import InfoExtractor
-from ..utils import (
+from ..compat import (
     compat_str,
 )
@@ -52,8 +52,8 @@ class MySpaceIE(InfoExtractor):
         if mobj.group('mediatype').startswith('music/song'):
             # songs don't store any useful info in the 'context' variable
             def search_data(name):
-                return self._search_regex(r'data-%s="(.*?)"' % name, webpage,
-                    name)
+                return self._search_regex(
+                    r'data-%s="(.*?)"' % name, webpage, name)

             streamUrl = search_data('stream-url')
             info = {
                 'id': video_id,
@@ -62,8 +62,8 @@ class MySpaceIE(InfoExtractor):
                 'thumbnail': self._og_search_thumbnail(webpage),
             }
         else:
-            context = json.loads(self._search_regex(r'context = ({.*?});', webpage,
-                u'context'))
+            context = json.loads(self._search_regex(
+                r'context = ({.*?});', webpage, 'context'))
             video = context['video']
             streamUrl = video['streamUrl']
             info = {


@@ -173,4 +173,3 @@ class MyVideoIE(InfoExtractor):
         'play_path': video_playpath,
         'player_url': video_swfobj,
     }
-


@@ -39,7 +39,6 @@ class NBAIE(InfoExtractor):
         duration = parse_duration(
             self._html_search_meta('duration', webpage, 'duration', fatal=False))
-
         return {
             'id': shortened_video_id,
             'url': video_url,


@@ -4,9 +4,11 @@ import re
 import json

 from .common import InfoExtractor
-from ..utils import (
+from ..compat import (
     compat_urlparse,
     compat_urllib_parse,
+)
+from ..utils import (
     unified_strdate,
 )
@@ -122,7 +124,7 @@ class NHLVideocenterIE(NHLBaseInfoExtractor):
         response = self._download_webpage(request_url, playlist_title)
         response = self._fix_json(response)
         if not response.strip():
-            self._downloader.report_warning(u'Got an empty reponse, trying '
+            self._downloader.report_warning('Got an empty reponse, trying '
                                             'adding the "newvideos" parameter')
             response = self._download_webpage(request_url + '&newvideos=true',
                                               playlist_title)


@@ -27,8 +27,7 @@ class NineGagIE(InfoExtractor):
             "thumbnail": "re:^https?://",
         },
         'add_ie': ['Youtube']
-    },
-    {
+    }, {
         'url': 'http://9gag.tv/p/KklwM/alternate-banned-opening-scene-of-gravity?ref=fsidebar',
         'info_dict': {
             'id': 'KklwM',


@@ -97,4 +97,3 @@ class OoyalaIE(InfoExtractor):
             }
         else:
             return self._extract_result(videos_info[0], videos_more_info)
-


@@ -6,6 +6,7 @@ import re
 from .common import InfoExtractor
 from ..utils import int_or_none

+
 class PodomaticIE(InfoExtractor):
     IE_NAME = 'podomatic'
     _VALID_URL = r'^(?P<proto>https?)://(?P<channel>[^.]+)\.podomatic\.com/entry/(?P<id>[^?]+)'


@@ -1,7 +1,5 @@
 from __future__ import unicode_literals

-import re
-
 from .common import InfoExtractor
@@ -9,32 +7,23 @@ class RedTubeIE(InfoExtractor):
     _VALID_URL = r'http://(?:www\.)?redtube\.com/(?P<id>[0-9]+)'
     _TEST = {
         'url': 'http://www.redtube.com/66418',
-        'file': '66418.mp4',
-        # md5 varies from time to time, as in
-        # https://travis-ci.org/rg3/youtube-dl/jobs/14052463#L295
-        #'md5': u'7b8c22b5e7098a3e1c09709df1126d2d',
         'info_dict': {
+            'id': '66418',
+            'ext': 'mp4',
             "title": "Sucked on a toilet",
             "age_limit": 18,
         }
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
-        video_extension = 'mp4'
+        video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)

-        self.report_extraction(video_id)
-
         video_url = self._html_search_regex(
-            r'<source src="(.+?)" type="video/mp4">', webpage, u'video URL')
+            r'<source src="(.+?)" type="video/mp4">', webpage, 'video URL')

         video_title = self._html_search_regex(
             r'<h1 class="videoTitle[^"]*">(.+?)</h1>',
-            webpage, u'title')
+            webpage, 'title')

         video_thumbnail = self._og_search_thumbnail(webpage)

         # No self-labeling, but they describe themselves as
@@ -44,7 +33,7 @@ class RedTubeIE(InfoExtractor):
         return {
             'id': video_id,
             'url': video_url,
-            'ext': video_extension,
+            'ext': 'mp4',
             'title': video_title,
             'thumbnail': video_thumbnail,
             'age_limit': age_limit,


@@ -41,4 +41,3 @@ class RingTVIE(InfoExtractor):
         'thumbnail': thumbnail_url,
         'description': description,
     }
-


@@ -44,7 +44,7 @@ class RtlXlIE(InfoExtractor):
         formats = self._extract_m3u8_formats(m3u8_url, uuid, ext='mp4')

-        video_urlpart = videopath.split('/flash/')[1][:-4]
+        video_urlpart = videopath.split('/flash/')[1][:-5]
         PG_URL_TEMPLATE = 'http://pg.us.rtl.nl/rtlxl/network/%s/progressive/%s.mp4'

         formats.extend([


@@ -54,7 +54,6 @@ def _decrypt_url(png):
     return url

-
 class RTVEALaCartaIE(InfoExtractor):
     IE_NAME = 'rtve.es:alacarta'
     IE_DESC = 'RTVE a la carta'


@@ -27,8 +27,7 @@ class SBSIE(InfoExtractor):
         'thumbnail': 're:http://.*\.jpg',
         },
         'add_ies': ['generic'],
-    },
-    {
+    }, {
         'url': 'http://www.sbs.com.au/ondemand/video/320403011771/Dingo-Conservation-The-Feed',
         'only_matching': True,
     }]


@@ -67,5 +67,3 @@ class ServingSysIE(InfoExtractor):
         'title': title,
         'entries': entries,
     }
-
-


@@ -1,7 +1,6 @@
 # encoding: utf-8
 from __future__ import unicode_literals

-import os.path
 import re
 import json
 import hashlib
@@ -12,15 +11,15 @@ from ..utils import (
     compat_urllib_parse,
     compat_urllib_request,
     ExtractorError,
-    url_basename,
     int_or_none,
+    unified_strdate,
 )


 class SmotriIE(InfoExtractor):
     IE_DESC = 'Smotri.com'
     IE_NAME = 'smotri'
-    _VALID_URL = r'^https?://(?:www\.)?(?:smotri\.com/video/view/\?id=|pics\.smotri\.com/(?:player|scrubber_custom8)\.swf\?file=)(?P<videoid>v(?P<realvideoid>[0-9]+)[a-z0-9]{4})'
+    _VALID_URL = r'^https?://(?:www\.)?(?:smotri\.com/video/view/\?id=|pics\.smotri\.com/(?:player|scrubber_custom8)\.swf\?file=)(?P<id>v(?P<realvideoid>[0-9]+)[a-z0-9]{4})'
     _NETRC_MACHINE = 'smotri'

     _TESTS = [
@@ -35,7 +34,6 @@ class SmotriIE(InfoExtractor):
                 'uploader': 'rbc2008',
                 'uploader_id': 'rbc08',
                 'upload_date': '20131118',
-                'description': 'катастрофа с камер видеонаблюдения, видео катастрофа с камер видеонаблюдения',
                 'thumbnail': 'http://frame6.loadup.ru/8b/a9/2610366.3.3.jpg',
             },
         },
@@ -50,7 +48,6 @@ class SmotriIE(InfoExtractor):
                 'uploader': 'Support Photofile@photofile',
                 'uploader_id': 'support-photofile',
                 'upload_date': '20070704',
-                'description': 'test, видео test',
                 'thumbnail': 'http://frame4.loadup.ru/03/ed/57591.2.3.jpg',
             },
         },
@@ -66,7 +63,6 @@ class SmotriIE(InfoExtractor):
                 'uploader_id': 'timoxa40',
                 'upload_date': '20100404',
                 'thumbnail': 'http://frame7.loadup.ru/af/3f/1390466.3.3.jpg',
-                'description': 'TOCCA_A_NOI_-_LE_COSE_NON_VANNO_CAMBIAMOLE_ORA-1, видео TOCCA_A_NOI_-_LE_COSE_NON_VANNO_CAMBIAMOLE_ORA-1',
             },
             'params': {
                 'videopassword': 'qwerty',
@@ -85,7 +81,6 @@ class SmotriIE(InfoExtractor):
                 'upload_date': '20101001',
                 'thumbnail': 'http://frame3.loadup.ru/75/75/1540889.1.3.jpg',
                 'age_limit': 18,
-                'description': 'этот ролик не покажут по ТВ, видео этот ролик не покажут по ТВ',
             },
             'params': {
                 'videopassword': '333'
@@ -102,17 +97,11 @@ class SmotriIE(InfoExtractor):
                 'uploader': 'HannahL',
                 'uploader_id': 'lisaha95',
                 'upload_date': '20090331',
-                'description': 'Shakira - Don\'t Bother, видео Shakira - Don\'t Bother',
                 'thumbnail': 'http://frame8.loadup.ru/44/0b/918809.7.3.jpg',
             },
         },
     ]

-    _SUCCESS = 0
-    _PASSWORD_NOT_VERIFIED = 1
-    _PASSWORD_DETECTED = 2
-    _VIDEO_NOT_FOUND = 3
-
     @classmethod
     def _extract_url(cls, webpage):
         mobj = re.search(
@@ -137,44 +126,44 @@ class SmotriIE(InfoExtractor):
         return self._html_search_meta(name, html, display_name)

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('videoid')
-        real_video_id = mobj.group('realvideoid')
+        video_id = self._match_id(url)

-        # Download video JSON data
-        video_json_url = 'http://smotri.com/vt.php?id=%s' % real_video_id
-        video_json_page = self._download_webpage(video_json_url, video_id, 'Downloading video JSON')
-        video_json = json.loads(video_json_page)
+        video_form = {
+            'ticket': video_id,
+            'video_url': '1',
+            'frame_url': '1',
+            'devid': 'LoadupFlashPlayer',
+            'getvideoinfo': '1',
+        }

-        status = video_json['status']
-        if status == self._VIDEO_NOT_FOUND:
-            raise ExtractorError('Video %s does not exist' % video_id, expected=True)
-        elif status == self._PASSWORD_DETECTED:  # The video is protected by a password, retry with
-                                                 # video-password set
-            video_password = self._downloader.params.get('videopassword', None)
-            if not video_password:
-                raise ExtractorError('This video is protected by a password, use the --video-password option', expected=True)
-            video_json_url += '&md5pass=%s' % hashlib.md5(video_password.encode('utf-8')).hexdigest()
-            video_json_page = self._download_webpage(video_json_url, video_id, 'Downloading video JSON (video-password set)')
-            video_json = json.loads(video_json_page)
-            status = video_json['status']
-            if status == self._PASSWORD_NOT_VERIFIED:
-                raise ExtractorError('Video password is invalid', expected=True)
+        request = compat_urllib_request.Request(
+            'http://smotri.com/video/view/url/bot/', compat_urllib_parse.urlencode(video_form))
+        request.add_header('Content-Type', 'application/x-www-form-urlencoded')

-        if status != self._SUCCESS:
-            raise ExtractorError('Unexpected status value %s' % status)
+        video = self._download_json(request, video_id, 'Downloading video JSON')

-        # Extract the URL of the video
-        video_url = video_json['file_data']
+        if video.get('_moderate_no') or not video.get('moderated'):
+            raise ExtractorError('Video %s has not been approved by moderator' % video_id, expected=True)
+
+        if video.get('error'):
+            raise ExtractorError('Video %s does not exist' % video_id, expected=True)
+
+        video_url = video.get('_vidURL') or video.get('_vidURL_mp4')
+        title = video['title']
+        thumbnail = video['_imgURL']
+        upload_date = unified_strdate(video['added'])
+        uploader = video['userNick']
+        uploader_id = video['userLogin']
+        duration = int_or_none(video['duration'])

         # Video JSON does not provide enough meta data
         # We will extract some from the video web page instead
-        video_page_url = 'http://smotri.com/video/view/?id=%s' % video_id
-        video_page = self._download_webpage(video_page_url, video_id, 'Downloading video page')
+        webpage_url = 'http://smotri.com/video/view/?id=%s' % video_id
+        webpage = self._download_webpage(webpage_url, video_id, 'Downloading video page')

         # Warning if video is unavailable
         warning = self._html_search_regex(
-            r'<div class="videoUnModer">(.*?)</div>', video_page,
+            r'<div class="videoUnModer">(.*?)</div>', webpage,
             'warning message', default=None)
         if warning is not None:
             self._downloader.report_warning(
@@ -182,84 +171,32 @@ class SmotriIE(InfoExtractor):
                 (video_id, warning))

         # Adult content
-        if re.search('EroConfirmText">', video_page) is not None:
+        if re.search('EroConfirmText">', webpage) is not None:
             self.report_age_confirmation()
             confirm_string = self._html_search_regex(
                 r'<a href="/video/view/\?id=%s&confirm=([^"]+)" title="[^"]+">' % video_id,
-                video_page, 'confirm string')
-            confirm_url = video_page_url + '&confirm=%s' % confirm_string
-            video_page = self._download_webpage(confirm_url, video_id, 'Downloading video page (age confirmed)')
+                webpage, 'confirm string')
+            confirm_url = webpage_url + '&confirm=%s' % confirm_string
+            webpage = self._download_webpage(confirm_url, video_id, 'Downloading video page (age confirmed)')
             adult_content = True
         else:
             adult_content = False

-        # Extract the rest of meta data
-        video_title = self._search_meta('name', video_page, 'title')
-        if not video_title:
-            video_title = os.path.splitext(url_basename(video_url))[0]
-
-        video_description = self._search_meta('description', video_page)
-        END_TEXT = ' на сайте Smotri.com'
-        if video_description and video_description.endswith(END_TEXT):
-            video_description = video_description[:-len(END_TEXT)]
-        START_TEXT = 'Смотреть онлайн ролик '
-        if video_description and video_description.startswith(START_TEXT):
-            video_description = video_description[len(START_TEXT):]
-        video_thumbnail = self._search_meta('thumbnail', video_page)
-
-        upload_date_str = self._search_meta('uploadDate', video_page, 'upload date')
-        if upload_date_str:
-            upload_date_m = re.search(r'(?P<year>\d{4})\.(?P<month>\d{2})\.(?P<day>\d{2})T', upload_date_str)
-            video_upload_date = (
-                (
-                    upload_date_m.group('year') +
-                    upload_date_m.group('month') +
-                    upload_date_m.group('day')
-                )
-                if upload_date_m else None
-            )
-        else:
-            video_upload_date = None
-
-        duration_str = self._search_meta('duration', video_page)
-        if duration_str:
-            duration_m = re.search(r'T(?P<hours>[0-9]{2})H(?P<minutes>[0-9]{2})M(?P<seconds>[0-9]{2})S', duration_str)
-            video_duration = (
-                (
-                    (int(duration_m.group('hours')) * 60 * 60) +
-                    (int(duration_m.group('minutes')) * 60) +
-                    int(duration_m.group('seconds'))
-                )
-                if duration_m else None
-            )
-        else:
-            video_duration = None
-
-        video_uploader = self._html_search_regex(
-            '<div class="DescrUser"><div>Автор.*?onmouseover="popup_user_info[^"]+">(.*?)</a>',
-            video_page, 'uploader', fatal=False, flags=re.MULTILINE|re.DOTALL)
-
-        video_uploader_id = self._html_search_regex(
-            '<div class="DescrUser"><div>Автор.*?onmouseover="popup_user_info\\(.*?\'([^\']+)\'\\);">',
-            video_page, 'uploader id', fatal=False, flags=re.MULTILINE|re.DOTALL)
-
-        video_view_count = self._html_search_regex(
+        view_count = self._html_search_regex(
             'Общее количество просмотров.*?<span class="Number">(\\d+)</span>',
-            video_page, 'view count', fatal=False, flags=re.MULTILINE|re.DOTALL)
+            webpage, 'view count', fatal=False, flags=re.MULTILINE | re.DOTALL)

         return {
             'id': video_id,
             'url': video_url,
-            'title': video_title,
-            'thumbnail': video_thumbnail,
-            'description': video_description,
-            'uploader': video_uploader,
-            'upload_date': video_upload_date,
-            'uploader_id': video_uploader_id,
-            'duration': video_duration,
-            'view_count': int_or_none(video_view_count),
+            'title': title,
+            'thumbnail': thumbnail,
+            'uploader': uploader,
+            'upload_date': upload_date,
+            'uploader_id': uploader_id,
+            'duration': duration,
+            'view_count': int_or_none(view_count),
             'age_limit': 18 if adult_content else 0,
-            'video_page_url': video_page_url
         }
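The rewritten extractor trades the old status-code handshake against vt.php for a single POST to smotri.com's bot endpoint. A sketch of the same request outside the extractor framework, written for Python 3's standard library (the video id below is a made-up example, and the network call is left commented since the endpoint may no longer respond):

    import json
    from urllib.parse import urlencode
    from urllib.request import Request, urlopen

    # Form fields copied from the diff; the ticket value is hypothetical.
    video_form = {
        'ticket': 'v2610366cdd1',
        'video_url': '1',
        'frame_url': '1',
        'devid': 'LoadupFlashPlayer',
        'getvideoinfo': '1',
    }

    request = Request(
        'http://smotri.com/video/view/url/bot/',
        urlencode(video_form).encode('ascii'))
    request.add_header('Content-Type', 'application/x-www-form-urlencoded')

    # video = json.loads(urlopen(request).read().decode('utf-8'))
    # then video['_vidURL'], video['title'], ... as consumed above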


@@ -1,4 +1,5 @@
 # encoding: utf-8
+from __future__ import unicode_literals

 import json
 import re
@@ -11,13 +12,14 @@ class SohuIE(InfoExtractor):
     _VALID_URL = r'https?://(?P<mytv>my\.)?tv\.sohu\.com/.+?/(?(mytv)|n)(?P<id>\d+)\.shtml.*?'

     _TEST = {
-        u'url': u'http://tv.sohu.com/20130724/n382479172.shtml#super',
-        u'file': u'382479172.mp4',
-        u'md5': u'bde8d9a6ffd82c63a1eefaef4eeefec7',
-        u'info_dict': {
-            u'title': u'MV：Far East Movement《The Illest》',
+        'url': 'http://tv.sohu.com/20130724/n382479172.shtml#super',
+        'md5': 'bde8d9a6ffd82c63a1eefaef4eeefec7',
+        'info_dict': {
+            'id': '382479172',
+            'ext': 'mp4',
+            'title': 'MV：Far East Movement《The Illest》',
         },
-        u'skip': u'Only available from China',
+        'skip': 'Only available from China',
     }

     def _real_extract(self, url):
@@ -26,11 +28,11 @@ class SohuIE(InfoExtractor):
             if mytv:
                 base_data_url = 'http://my.tv.sohu.com/play/videonew.do?vid='
             else:
-                base_data_url = u'http://hot.vrs.sohu.com/vrs_flash.action?vid='
+                base_data_url = 'http://hot.vrs.sohu.com/vrs_flash.action?vid='

             data_url = base_data_url + str(vid_id)
             data_json = self._download_webpage(
                 data_url, video_id,
-                note=u'Downloading JSON data for ' + str(vid_id))
+                note='Downloading JSON data for ' + str(vid_id))
             return json.loads(data_json)

         mobj = re.match(self._VALID_URL, url)
@@ -39,11 +41,11 @@ class SohuIE(InfoExtractor):
         webpage = self._download_webpage(url, video_id)

         raw_title = self._html_search_regex(r'(?s)<title>(.+?)</title>',
-                                            webpage, u'video title')
+                                            webpage, 'video title')
         title = raw_title.partition('-')[0].strip()

         vid = self._html_search_regex(r'var vid ?= ?["\'](\d+)["\']', webpage,
-                                      u'video path')
+                                      'video path')
         data = _fetch_data(vid, mytv)

         QUALITIES = ('ori', 'super', 'high', 'nor')
@@ -51,7 +53,7 @@ class SohuIE(InfoExtractor):
                     for q in QUALITIES
                     if data['data'][q + 'Vid'] != 0]
         if not vid_ids:
-            raise ExtractorError(u'No formats available for this video')
+            raise ExtractorError('No formats available for this video')

         # For now, we just pick the highest available quality
         vid_id = vid_ids[-1]
@@ -69,7 +71,7 @@ class SohuIE(InfoExtractor):
                           (allot, prot, clipsURL[i], su[i]))
             part_str = self._download_webpage(
                 part_url, video_id,
-                note=u'Downloading part %d of %d' % (i+1, part_count))
+                note='Downloading part %d of %d' % (i + 1, part_count))

             part_info = part_str.split('|')
             video_url = '%s%s?key=%s' % (part_info[0], su[i], part_info[3])


@@ -33,5 +33,6 @@ class SpaceIE(InfoExtractor):
         # Other videos works fine with the info from the object
         brightcove_url = BrightcoveIE._extract_brightcove_url(webpage)
         if brightcove_url is None:
-            raise ExtractorError(u'The webpage does not contain a video', expected=True)
+            raise ExtractorError(
+                'The webpage does not contain a video', expected=True)
         return self.url_result(brightcove_url, BrightcoveIE.ie_key())


@@ -93,4 +93,3 @@ class SportDeutschlandIE(InfoExtractor):
         'rtmp_live': asset.get('live'),
         'timestamp': parse_iso8601(asset.get('date')),
     }
-


@@ -1,7 +1,8 @@
+from __future__ import unicode_literals
+
 from .common import InfoExtractor
+from ..compat import compat_str
 from ..utils import (
-    compat_str,
     ExtractorError,
 )
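
A note on the import shuffle above: this batch of commits moves compatibility aliases out of youtube_dl/utils.py into the new youtube_dl/compat.py module. As a minimal sketch of what compat_str amounts to (an assumption based on the usual Python 2/3 shim pattern, not a verbatim copy of compat.py):

    import sys

    if sys.version_info[0] >= 3:
        compat_str = str
    else:
        compat_str = unicode  # noqa: F821 -- the name only exists on Python 2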
@@ -17,10 +18,10 @@ class SubtitlesInfoExtractor(InfoExtractor):
         sub_lang_list = self._get_available_subtitles(video_id, webpage)
         auto_captions_list = self._get_available_automatic_caption(video_id, webpage)
         sub_lang = ",".join(list(sub_lang_list.keys()))
-        self.to_screen(u'%s: Available subtitles for video: %s' %
+        self.to_screen('%s: Available subtitles for video: %s' %
                        (video_id, sub_lang))
         auto_lang = ",".join(auto_captions_list.keys())
-        self.to_screen(u'%s: Available automatic captions for video: %s' %
+        self.to_screen('%s: Available automatic captions for video: %s' %
                        (video_id, auto_lang))

     def extract_subtitles(self, video_id, webpage):
@@ -50,8 +51,8 @@ class SubtitlesInfoExtractor(InfoExtractor):
         sub_lang_list = {}
         for sub_lang in requested_langs:
-            if not sub_lang in available_subs_list:
-                self._downloader.report_warning(u'no closed captions found in the specified language "%s"' % sub_lang)
+            if sub_lang not in available_subs_list:
+                self._downloader.report_warning('no closed captions found in the specified language "%s"' % sub_lang)
                 continue
             sub_lang_list[sub_lang] = available_subs_list[sub_lang]
@@ -70,10 +71,10 @@ class SubtitlesInfoExtractor(InfoExtractor):
         try:
             sub = self._download_subtitle_url(sub_lang, url)
         except ExtractorError as err:
-            self._downloader.report_warning(u'unable to download video subtitles for %s: %s' % (sub_lang, compat_str(err)))
+            self._downloader.report_warning('unable to download video subtitles for %s: %s' % (sub_lang, compat_str(err)))
             return
         if not sub:
-            self._downloader.report_warning(u'Did not fetch video subtitles')
+            self._downloader.report_warning('Did not fetch video subtitles')
             return
         return sub
@@ -94,5 +95,5 @@ class SubtitlesInfoExtractor(InfoExtractor):
         Must be redefined by the subclasses that support automatic captions,
         otherwise it will return {}
         """
-        self._downloader.report_warning(u'Automatic Captions not supported by this server')
+        self._downloader.report_warning('Automatic Captions not supported by this server')
         return {}


@@ -0,0 +1,62 @@
+# encoding: utf-8
+from __future__ import unicode_literals
+
+import json
+
+from .common import InfoExtractor
+from ..utils import (
+    js_to_json,
+    qualities,
+)
+
+
+class TassIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:tass\.ru|itar-tass\.com)/[^/]+/(?P<id>\d+)'
+    _TESTS = [
+        {
+            'url': 'http://tass.ru/obschestvo/1586870',
+            'md5': '3b4cdd011bc59174596b6145cda474a4',
+            'info_dict': {
+                'id': '1586870',
+                'ext': 'mp4',
+                'title': 'Посетителям московского зоопарка показали красную панду',
+                'description': 'Приехавшую из Дублина Зейну можно увидеть в павильоне "Кошки тропиков"',
+                'thumbnail': 're:^https?://.*\.jpg$',
+            },
+        },
+        {
+            'url': 'http://itar-tass.com/obschestvo/1600009',
+            'only_matching': True,
+        },
+    ]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+
+        sources = json.loads(js_to_json(self._search_regex(
+            r'(?s)sources\s*:\s*(\[.+?\])', webpage, 'sources')))
+
+        quality = qualities(['sd', 'hd'])
+
+        formats = []
+        for source in sources:
+            video_url = source.get('file')
+            if not video_url or not video_url.startswith('http') or not video_url.endswith('.mp4'):
+                continue
+
+            label = source.get('label')
+            formats.append({
+                'url': video_url,
+                'format_id': label,
+                'quality': quality(label),
+            })
+        self._sort_formats(formats)
+
+        return {
+            'id': video_id,
+            'title': self._og_search_title(webpage),
+            'description': self._og_search_description(webpage),
+            'thumbnail': self._og_search_thumbnail(webpage),
+            'formats': formats,
+        }
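
The new TassIE leans on two helpers from ..utils: js_to_json, which turns a relaxed JavaScript object literal (unquoted keys, single-quoted strings) into valid JSON, and qualities, which builds a ranking function from a worst-to-best list. A rough standalone sketch, assuming youtube_dl is importable; the sample `raw` literal is hypothetical, shaped like what the (?s)sources\s*:\s*(\[.+?\]) regex would capture:

    from youtube_dl.utils import js_to_json, qualities

    raw = "[{file: 'http://example.com/v.mp4', label: 'sd'}]"
    print(js_to_json(raw))
    # [{"file": "http://example.com/v.mp4", "label": "sd"}]

    q = qualities(['sd', 'hd'])          # later entries rank higher
    print(q('sd'), q('hd'), q('webm'))   # 0 1 -1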


@@ -16,8 +16,7 @@ class TeamcocoIE(InfoExtractor):
             'title': 'Conan Becomes A Mary Kay Beauty Consultant',
             'description': 'Mary Kay is perhaps the most trusted name in female beauty, so of course Conan is a natural choice to sell their products.'
         }
-    },
-    {
+    }, {
         'url': 'http://teamcoco.com/video/louis-ck-interview-george-w-bush',
         'file': '19705.mp4',
         'md5': 'cde9ba0fa3506f5f017ce11ead928f9a',


@@ -35,6 +35,7 @@ class ThePlatformIE(InfoExtractor):
             'skip_download': True,
         },
     }
+
     def _real_extract(self, url):
         mobj = re.match(self._VALID_URL, url)
         video_id = mobj.group('id')
@@ -48,7 +49,6 @@ class ThePlatformIE(InfoExtractor):
         smil_url = ('http://link.theplatform.com/s/dJ5BDC/{0}/meta.smil?'
                     'format=smil&mbr=true'.format(video_id))
         meta = self._download_xml(smil_url, video_id)
-
         try:
             error_msg = next(


@@ -36,9 +36,10 @@ class TlcDeIE(InfoExtractor):
             'ext': 'mp4',
             'title': 'Breaking Amish: Die Welt da draußen',
             'uploader': 'Discovery Networks - Germany',
-            'description': 'Vier Amische und eine Mennonitin wagen in New York'
+            'description': (
+                'Vier Amische und eine Mennonitin wagen in New York'
                 ' den Sprung in ein komplett anderes Leben. Begleitet sie auf'
-                ' ihrem spannenden Weg.',
+                ' ihrem spannenden Weg.'),
         },
     }


@@ -0,0 +1,32 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+
+
+class TMZIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?tmz\.com/videos/(?P<id>[^/]+)/?'
+    _TEST = {
+        'url': 'http://www.tmz.com/videos/0_okj015ty/',
+        'md5': '791204e3bf790b1426cb2db0706184c0',
+        'info_dict': {
+            'id': '0_okj015ty',
+            'url': 'http://tmz.vo.llnwd.net/o28/2014-03/13/0_okj015ty_0_rt8ro3si_2.mp4',
+            'ext': 'mp4',
+            'title': 'Kim Kardashian\'s Boobs Unlock a Mystery!',
+            'description': 'Did Kim Kardasain try to one-up Khloe by one-upping Kylie??? Or is she just showing off her amazing boobs?',
+            'thumbnail': 'http://cdnbakmi.kaltura.com/p/591531/sp/59153100/thumbnail/entry_id/0_okj015ty/version/100002/acv/182/width/640',
+        }
+    }
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+
+        return {
+            'id': video_id,
+            'url': self._html_search_meta('VideoURL', webpage, fatal=True),
+            'title': self._og_search_title(webpage),
+            'description': self._og_search_description(webpage),
+            'thumbnail': self._html_search_meta('ThumbURL', webpage),
+        }
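
TMZIE is almost entirely metadata-driven: _html_search_meta(name, webpage, fatal=True) returns the content attribute of the matching <meta> tag, and the og:* helpers do the same for Open Graph tags. A simplified stand-in for the meta lookup (the real helper in common.py is more tolerant, e.g. about attribute order and name/itemprop/property variants; this sketch assumes name-then-content):

    import re

    def search_meta(name, html):
        # Grab the content attribute of <meta name="..." content="...">.
        m = re.search(
            r'<meta[^>]+name=["\']%s["\'][^>]+content=["\']([^"\']+)["\']'
            % re.escape(name), html)
        return m.group(1) if m else None

    html = '<meta name="VideoURL" content="http://example.com/clip.mp4">'
    print(search_meta('VideoURL', html))  # http://example.com/clip.mp4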


@@ -1,28 +1,28 @@
+from __future__ import unicode_literals
+
 import json
-import re

 from .common import InfoExtractor


 class TriluliluIE(InfoExtractor):
-    _VALID_URL = r'(?x)(?:https?://)?(?:www\.)?trilulilu\.ro/video-(?P<category>[^/]+)/(?P<video_id>[^/]+)'
+    _VALID_URL = r'https?://(?:www\.)?trilulilu\.ro/video-[^/]+/(?P<id>[^/]+)'
     _TEST = {
-        u"url": u"http://www.trilulilu.ro/video-animatie/big-buck-bunny-1",
-        u'file': u"big-buck-bunny-1.mp4",
-        u'info_dict': {
-            u"title": u"Big Buck Bunny",
-            u"description": u":) pentru copilul din noi",
+        'url': 'http://www.trilulilu.ro/video-animatie/big-buck-bunny-1',
+        'info_dict': {
+            'id': 'big-buck-bunny-1',
+            'ext': 'mp4',
+            'title': 'Big Buck Bunny',
+            'description': ':) pentru copilul din noi',
         },
         # Server ignores Range headers (--test)
-        u"params": {
-            u"skip_download": True
+        'params': {
+            'skip_download': True
         }
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('video_id')
+        video_id = self._match_id(url)

         webpage = self._download_webpage(url, video_id)
         title = self._og_search_title(webpage)
title = self._og_search_title(webpage) title = self._og_search_title(webpage)
@@ -30,20 +30,20 @@ class TriluliluIE(InfoExtractor):
         description = self._og_search_description(webpage)

         log_str = self._search_regex(
-            r'block_flash_vars[ ]=[ ]({[^}]+})', webpage, u'log info')
+            r'block_flash_vars[ ]=[ ]({[^}]+})', webpage, 'log info')
         log = json.loads(log_str)

-        format_url = (u'http://fs%(server)s.trilulilu.ro/%(hash)s/'
-                      u'video-formats2' % log)
+        format_url = ('http://fs%(server)s.trilulilu.ro/%(hash)s/'
+                      'video-formats2' % log)
         format_doc = self._download_xml(
             format_url, video_id,
-            note=u'Downloading formats',
-            errnote=u'Error while downloading formats')
+            note='Downloading formats',
+            errnote='Error while downloading formats')

         video_url_template = (
-            u'http://fs%(server)s.trilulilu.ro/stream.php?type=video'
-            u'&source=site&hash=%(hash)s&username=%(userid)s&'
-            u'key=ministhebest&format=%%s&sig=&exp=' %
+            'http://fs%(server)s.trilulilu.ro/stream.php?type=video'
+            '&source=site&hash=%(hash)s&username=%(userid)s&'
+            'key=ministhebest&format=%%s&sig=&exp=' %
             log)
         formats = [
             {
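
A side note on the template above: the %%s is deliberate. The first %-substitution against the log dict collapses %% to a literal %, leaving a %s placeholder that is filled with the format name later. A quick sketch with made-up values standing in for the page's flash vars:

    log = {'server': '042', 'hash': 'abc123', 'userid': 'someuser'}
    template = (
        'http://fs%(server)s.trilulilu.ro/stream.php?type=video'
        '&source=site&hash=%(hash)s&username=%(userid)s&'
        'key=ministhebest&format=%%s&sig=&exp=' % log)
    print(template % 'flv')
    # ...stream.php?type=video&source=site&hash=abc123&username=someuser&key=ministhebest&format=flv&sig=&exp=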
@@ -63,4 +63,3 @@ class TriluliluIE(InfoExtractor):
             'description': description,
             'thumbnail': thumbnail,
         }
-
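
The _match_id switch above (also used by the new Tass and TMZ extractors) only works because _VALID_URL now names its capture group 'id' rather than 'video_id': the helper is essentially the boilerplate it replaces, hard-wired to that group name. A sketch, assuming the definition in common.py at this point in the codebase:

    import re

    def _match_id(self, url):
        # Replaces the deleted two-liner:
        #     mobj = re.match(self._VALID_URL, url)
        #     video_id = mobj.group('video_id')
        m = re.match(self._VALID_URL, url)
        assert m
        return m.group('id')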


@@ -73,7 +73,7 @@ class TudouIE(InfoExtractor):
         result = []
         len_parts = len(parts)
         if len_parts > 1:
-            self.to_screen(u'%s: found %s parts' % (video_id, len_parts))
+            self.to_screen('%s: found %s parts' % (video_id, len_parts))
         for part in parts:
             part_id = part['k']
             final_url = self._url_for_id(part_id, quality)

Some files were not shown because too many files have changed in this diff.