Compare commits

...

78 Commits

Author SHA1 Message Date
Philipp Hagemeister
7c8ea53b96 release 2014.11.26.1 2014-11-26 22:01:06 +01:00
Philipp Hagemeister
dcddc10a50 [test_unicode_literals] Arm unicode_literals check
From now on, the line

from __future__ import unicode_literals

should be contained in every single Python file lest we run into any more 2.x/3.x issues.
Going forward, we're likely to develop on 3.x only and would likely miss subtle bugs otherwise.
2014-11-26 20:01:22 +01:00
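For context, a minimal sketch of what the import changes (illustrative, not from any commit here): with unicode_literals in effect, bare string literals are text on Python 2.x just as on 3.x, and byte strings must be marked explicitly.

    from __future__ import unicode_literals

    s = 'español'       # text (unicode) on 2.x and 3.x alike
    b = b'raw bytes'    # byte strings now need an explicit b prefix
    assert isinstance(s, type(u''))  # holds on both interpreter lines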
Sergey M․
a1008af412 [gorillavid] Update IE_DESC 2014-11-27 00:24:19 +06:00
Sergey M․
61c0663c1e [udemy] Generalize download json and fix login 2014-11-26 21:25:43 +06:00
Sergey M․
81a7a521c5 [gorillavid] Remove unused import 2014-11-26 21:02:46 +06:00
Sergey M․
e293711802 [udemy] Set session cookies to API requests (Closes #4124, closes #4219, closes #4308) 2014-11-26 21:00:18 +06:00
Sergey M․
ceb3367320 [gorillavid] Generalize extraction with countdown timeout and support faststream.in (Closes #4297) 2014-11-26 20:02:40 +06:00
Philipp Hagemeister
a03aaaed2e Declare Python 3.2 compatibility 2014-11-26 13:08:42 +01:00
Philipp Hagemeister
e075a44afb [tests] Remove useless u prefixes 2014-11-26 13:07:32 +01:00
Philipp Hagemeister
8865bdeb37 Remove useless u prefixes 2014-11-26 13:06:02 +01:00
Philipp Hagemeister
3aa578cad2 [ffmpeg] Modernize 2014-11-26 13:05:49 +01:00
Philipp Hagemeister
d3b5101a91 [videopremium] Modernize 2014-11-26 13:03:22 +01:00
Philipp Hagemeister
5c32110114 [videofyme] Modernize 2014-11-26 13:01:39 +01:00
Philipp Hagemeister
24144e3b8d [tvp] Modernize 2014-11-26 12:58:53 +01:00
Philipp Hagemeister
b3034f9df7 [trilulilu] Modernize 2014-11-26 12:56:43 +01:00
Philipp Hagemeister
4c6d2ff8dc [sohu] Modernize 2014-11-26 12:53:55 +01:00
Philipp Hagemeister
faf3494894 [redtube] Modernize 2014-11-26 12:52:45 +01:00
Philipp Hagemeister
535a66ef66 [muzu] Modernize 2014-11-26 12:50:37 +01:00
Philipp Hagemeister
5c40bba82f [hotnewhiphop] Modernize 2014-11-26 12:45:40 +01:00
Philipp Hagemeister
855dc479c2 [subtitles] Modernize 2014-11-26 12:43:06 +01:00
Philipp Hagemeister
0792d5634e [youtube] Remove useless u prefixes 2014-11-26 12:41:53 +01:00
Philipp Hagemeister
e91cdcae1a [appletrailers] Modernize 2014-11-26 12:41:24 +01:00
Philipp Hagemeister
27e1400f55 [aparat] Modernize 2014-11-26 12:40:51 +01:00
Philipp Hagemeister
e0938e7731 [addanime] Modernize 2014-11-26 12:40:05 +01:00
Philipp Hagemeister
b72823a0a4 [francetv] PEP8 2014-11-26 12:38:20 +01:00
Philipp Hagemeister
673cf0e773 [update] Remove useless import 2014-11-26 12:37:45 +01:00
Philipp Hagemeister
f8aace93cd [academicearth] Modernize 2014-11-26 12:35:57 +01:00
Philipp Hagemeister
80310134e0 [mplayer] Modernize 2014-11-26 12:34:52 +01:00
Philipp Hagemeister
4d2d638df4 [http] Modernize 2014-11-26 12:27:36 +01:00
Philipp Hagemeister
0e44f90e18 [hls] Remove useless u prefixes 2014-11-26 12:26:21 +01:00
Philipp Hagemeister
15938ab67a [update] Modernize 2014-11-26 12:24:57 +01:00
Philipp Hagemeister
ab4ee31eb1 [utils] remove useless u prefix 2014-11-26 11:50:22 +01:00
Philipp Hagemeister
b061ea6e9f [compat] Beautify assertion 2014-11-26 11:48:09 +01:00
Philipp Hagemeister
4aae94f9d0 [YoutubeDL] Remove incorrect documentation 2014-11-26 11:25:43 +01:00
Philipp Hagemeister
acda92f6bc Clarify --no-playlist documentation (Closes #4309) 2014-11-26 10:51:03 +01:00
Philipp Hagemeister
ddfd0f2727 release 2014.11.26 2014-11-26 10:46:12 +01:00
Philipp Hagemeister
d0720e7118 Merge branch 'master' of github.com:rg3/youtube-dl 2014-11-26 10:45:57 +01:00
Philipp Hagemeister
4e262a8838 [generic] Detect direct video links (Fixes #4149, #4313) 2014-11-26 10:44:39 +01:00
Sergey M․
b9ed3af343 [tass] Add extractor (Closes #4296) 2014-11-25 22:24:33 +06:00
Philipp Hagemeister
63c9b2c1d9 release 2014.11.25.1 2014-11-25 14:34:29 +01:00
Philipp Hagemeister
65f3a228b1 [generic] Add support for LazyYT embeds (Fixes #4306) 2014-11-25 14:34:19 +01:00
Philipp Hagemeister
3004ae2c3a Credit @t0mm0 for xminus (#4302) 2014-11-25 12:16:48 +01:00
Philipp Hagemeister
d9836a5917 release 2014.11.25 2014-11-25 09:56:52 +01:00
Philipp Hagemeister
be64b5b098 [xminus] Simplify and extend (#4302) 2014-11-25 09:54:54 +01:00
Philipp Hagemeister
c3e74731c2 [README] Mention _og_search_description (#4304)
Lots of sites do have this meta tag, so just add it to the example.
2014-11-25 09:36:27 +01:00
Philipp Hagemeister
c920d7f00d [README] Adapt code to new style
Nearly every IE will download the webpage first anyway.
2014-11-25 09:23:46 +01:00
Philipp Hagemeister
0bbf12239c Merge remote-tracking branch 't0mm0/x-minus' 2014-11-25 09:22:33 +01:00
Philipp Hagemeister
70d68eb46f Credit @MatthewRayfield for tmz (#4304) 2014-11-25 09:17:59 +01:00
Philipp Hagemeister
c553fe5d29 [tmz] Simplify (#4304) 2014-11-25 09:16:40 +01:00
Matthew Rayfield
f0c3d729d7 [tmz] Add new extractor 2014-11-25 02:54:13 -05:00
t0mm0
1cdedfee10 [XMinus] Added new extractor. 2014-11-25 03:25:28 +00:00
Philipp Hagemeister
93129d9442 release 2014.11.24 2014-11-24 22:56:43 +01:00
Philipp Hagemeister
e8c8653e9d Merge remote-tracking branch 'origin/master' 2014-11-24 22:52:04 +01:00
Philipp Hagemeister
fab89c67c5 Credit @ossi96 for bpb (#4298) 2014-11-24 22:47:49 +01:00
Philipp Hagemeister
3d960a22fa [bpb] Simplify (#4298) 2014-11-24 22:47:23 +01:00
Philipp Hagemeister
51bbb084d3 Merge remote-tracking branch 'ossi96/bpb' 2014-11-24 22:42:56 +01:00
Naglis Jonaitis
2c25a2bd29 [tunein] Add new extractor (Closes #4097) 2014-11-24 23:15:33 +02:00
Oskar Jauch
355682be01 bpb Add new extractor 2014-11-24 20:02:00 +01:00
Jaime Marquínez Ferrándiz
00e9d396ab [francetv] Use the m3u8 manifest for georestricted videos (closes #3963)
Generating the correct urls for the f4m segments seems to require a lot of work.
Also raise an error if the video is not available from your location.
2014-11-24 19:49:43 +01:00
Philipp Hagemeister
14d4e90eb1 [downloader/__init__] Define proper __all__ 2014-11-23 22:25:12 +01:00
Philipp Hagemeister
b74e86f48a Fix all PEP8 issues except E501 2014-11-23 22:21:46 +01:00
Philipp Hagemeister
3d36cea4ac [vk] PEP8 2014-11-23 22:14:27 +01:00
Philipp Hagemeister
380b822003 Remove outdated transition helper scripts 2014-11-23 22:13:03 +01:00
Philipp Hagemeister
b66e699877 [myspace] pep8 and modernization 2014-11-23 22:12:18 +01:00
Philipp Hagemeister
27f8b0994e Merge remote-tracking branch 'jtwaleson/master' 2014-11-23 22:10:26 +01:00
Philipp Hagemeister
e311b6389a Credit @daohoangson for zingmp3 (#4288) 2014-11-23 22:01:15 +01:00
Jouke Waleson
fab6d4c048 remove useless line, the result is never used 2014-11-23 22:00:35 +01:00
Philipp Hagemeister
4ffc31033e [zingmp3] Simplify and PEP8 (#4288) 2014-11-23 22:00:25 +01:00
Philipp Hagemeister
c1777d5cb3 Merge remote-tracking branch 'daohoangson/zing-mp3' 2014-11-23 21:55:51 +01:00
Jouke Waleson
9e1a5b8455 PEP8: applied even more rules 2014-11-23 21:39:15 +01:00
Philipp Hagemeister
784b6d3a9b Merge remote-tracking branch 'jtwaleson/master' 2014-11-23 21:33:31 +01:00
Dao Hoang Son
c66bdc4869 [zingmp3] Added support for songs and albums 2014-11-24 03:25:47 +07:00
Jouke Waleson
2514d2635e PEP8: E225,E227 2014-11-23 21:23:05 +01:00
Jouke Waleson
8bcc875676 PEP8: more applied 2014-11-23 21:20:46 +01:00
Jouke Waleson
5f6a1245ff PEP8 applied 2014-11-23 20:41:03 +01:00
Philipp Hagemeister
f3a3407226 [youtube] Clarify keywords 2014-11-23 20:09:10 +01:00
Sergey M․
598c218f7b [smotri] Adapt to new API and modernize 2014-11-23 23:53:41 +06:00
Naglis Jonaitis
4698b14b76 [rtlxl] Strip additional dot from video URL (#4115) 2014-11-23 13:28:09 +02:00
233 changed files with 1785 additions and 1295 deletions

View File

@@ -84,3 +84,7 @@ xantares
 Jan Matějka
 Mauroy Sébastien
 William Sewell
+Dao Hoang Son
+Oskar Jauch
+Matthew Rayfield
+t0mm0

View File

@@ -30,7 +30,7 @@ Alternatively, refer to the developer instructions below for how to check out an
 # DESCRIPTION
 **youtube-dl** is a small command-line program to download videos from
 YouTube.com and a few more sites. It requires the Python interpreter, version
-2.6, 2.7, or 3.3+, and it is not platform specific. It should work on
+2.6, 2.7, or 3.2+, and it is not platform specific. It should work on
 your Unix box, on Windows or on Mac OS X. It is released to the public domain,
 which means you can modify it, redistribute it or use it however you like.
@@ -93,7 +93,8 @@ which means you can modify it, redistribute it or use it however you like.
                                      COUNT views
     --max-views COUNT                Do not download any videos with more than
                                      COUNT views
-    --no-playlist                    download only the currently playing video
+    --no-playlist                    If the URL refers to a video and a
+                                     playlist, download only the video.
     --age-limit YEARS                download only videos suitable for the given
                                      age
     --download-archive FILE          Download only videos not listed in the
@@ -492,14 +493,15 @@ If you want to add support for a new site, you can follow this quick list (assum
     def _real_extract(self, url):
         video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)

         # TODO more code goes here, for example ...
-        webpage = self._download_webpage(url, video_id)
         title = self._html_search_regex(r'<h1>(.*?)</h1>', webpage, 'title')

         return {
             'id': video_id,
             'title': title,
+            'description': self._og_search_description(webpage),
             # TODO more properties (see youtube_dl/extractor/common.py)
         }
 ```
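Pieced together, the new-style README example from the two commits above ([README] Adapt code to new style, [README] Mention _og_search_description) reads as follows; the surrounding InfoExtractor subclass boilerplate is assumed from the part of the README this hunk does not show:

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)

        # TODO more code goes here, for example ...
        title = self._html_search_regex(r'<h1>(.*?)</h1>', webpage, 'title')

        return {
            'id': video_id,
            'title': title,
            'description': self._og_search_description(webpage),
            # TODO more properties (see youtube_dl/extractor/common.py)
        }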

View File

@@ -1,4 +1,6 @@
 #!/usr/bin/env python
+from __future__ import unicode_literals
+
 import os
 from os.path import dirname as dirn
 import sys
@@ -9,16 +11,17 @@ import youtube_dl
 BASH_COMPLETION_FILE = "youtube-dl.bash-completion"
 BASH_COMPLETION_TEMPLATE = "devscripts/bash-completion.in"

+
 def build_completion(opt_parser):
     opts_flag = []
     for group in opt_parser.option_groups:
         for option in group.option_list:
-            #for every long flag
+            # for every long flag
             opts_flag.append(option.get_opt_string())
     with open(BASH_COMPLETION_TEMPLATE) as f:
         template = f.read()
     with open(BASH_COMPLETION_FILE, "w") as f:
-        #just using the special char
+        # just using the special char
         filled_template = template.replace("{{flags}}", " ".join(opts_flag))
         f.write(filled_template)

View File

@@ -142,7 +142,7 @@ def win_service_set_status(handle, status_code):
 def win_service_main(service_name, real_main, argc, argv_raw):
     try:
-        #args = [argv_raw[i].value for i in range(argc)]
+        # args = [argv_raw[i].value for i in range(argc)]
         stop_event = threading.Event()
         handler = HandlerEx(functools.partial(stop_event, win_service_handler))
         h = advapi32.RegisterServiceCtrlHandlerExW(service_name, handler, None)
@@ -233,6 +233,7 @@ def rmtree(path):
 #==============================================================================
+
 class BuildError(Exception):
     def __init__(self, output, code=500):
         self.output = output
@@ -369,7 +370,7 @@ class Builder(PythonBuilder, GITBuilder, YoutubeDLBuilder, DownloadBuilder, Clea
 class BuildHTTPRequestHandler(BaseHTTPRequestHandler):
-    actionDict = { 'build': Builder, 'download': Builder } # They're the same, no more caching.
+    actionDict = {'build': Builder, 'download': Builder}  # They're the same, no more caching.

     def do_GET(self):
         path = urlparse.urlparse(self.path)

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env python
+from __future__ import unicode_literals

 """
 This script employs a VERY basic heuristic ('porn' in webpage.lower()) to check

View File

@@ -23,13 +23,13 @@ EXTRA_ARGS = {
     'batch-file': ['--require-parameter'],
 }

+
 def build_completion(opt_parser):
     commands = []
     for group in opt_parser.option_groups:
         for option in group.option_list:
             long_option = option.get_opt_string().strip('-')
-            help_msg = shell_quote([option.help])
             complete_cmd = ['complete', '--command', 'youtube-dl', '--long-option', long_option]
             if option._short_opts:
                 complete_cmd += ['--short-option', option._short_opts[0].strip('-')]

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
from __future__ import unicode_literals
import json import json
import sys import sys

View File

@@ -1,8 +1,7 @@
 #!/usr/bin/env python3
+from __future__ import unicode_literals

 import hashlib
-import shutil
-import subprocess
-import tempfile
 import urllib.request
 import json

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env python3
+from __future__ import unicode_literals, with_statement

 import rsa
 import json
@@ -29,4 +30,5 @@ signature = hexlify(rsa.pkcs1.sign(json.dumps(versions_info, sort_keys=True).enc
 print('signature: ' + signature)

 versions_info['signature'] = signature
-json.dump(versions_info, open('update/versions.json', 'w'), indent=4, sort_keys=True)
+with open('update/versions.json', 'w') as versionsf:
+    json.dump(versions_info, versionsf, indent=4, sort_keys=True)

View File

@@ -1,7 +1,7 @@
 #!/usr/bin/env python
 # coding: utf-8
-from __future__ import with_statement
+from __future__ import with_statement, unicode_literals

 import datetime
 import glob
@@ -13,7 +13,7 @@ year = str(datetime.datetime.now().year)
 for fn in glob.glob('*.html*'):
     with io.open(fn, encoding='utf-8') as f:
         content = f.read()
-    newc = re.sub(u'(?P<copyright>Copyright © 2006-)(?P<year>[0-9]{4})', u'Copyright © 2006-' + year, content)
+    newc = re.sub(r'(?P<copyright>Copyright © 2006-)(?P<year>[0-9]{4})', 'Copyright © 2006-' + year, content)
     if content != newc:
         tmpFn = fn + '.part'
         with io.open(tmpFn, 'wt', encoding='utf-8') as outf:

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env python3
+from __future__ import unicode_literals

 import datetime
 import io
@@ -73,4 +74,3 @@ atom_template = atom_template.replace('@ENTRIES@', entries_str)

 with io.open('update/releases.atom', 'w', encoding='utf-8') as atom_file:
     atom_file.write(atom_template)
-

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env python3
+from __future__ import unicode_literals

 import sys
 import os
@@ -9,6 +10,7 @@ sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(
 import youtube_dl

+
 def main():
     with open('supportedsites.html.in', 'r', encoding='utf-8') as tmplf:
         template = tmplf.read()
@@ -21,7 +23,7 @@ def main():
             continue
         elif ie_desc is not None:
             ie_html += ': {}'.format(ie.IE_DESC)
-        if ie.working() == False:
+        if not ie.working():
             ie_html += ' (Currently broken)'
         ie_htmls.append('<li>{}</li>'.format(ie_html))

View File

@@ -1,3 +1,5 @@
+from __future__ import unicode_literals
+
 import io
 import sys
 import re

View File

@@ -1,3 +1,4 @@
+from __future__ import unicode_literals
 import io
 import os.path

View File

@@ -1,40 +0,0 @@
-#!/usr/bin/env python
-
-import sys, os
-
-try:
-    import urllib.request as compat_urllib_request
-except ImportError: # Python 2
-    import urllib2 as compat_urllib_request
-
-sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n')
-sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n')
-sys.stderr.write(u'The new location of the binaries is https://github.com/rg3/youtube-dl/downloads, not the git repository.\n\n')
-
-try:
-    raw_input()
-except NameError: # Python 3
-    input()
-
-filename = sys.argv[0]
-
-API_URL = "https://api.github.com/repos/rg3/youtube-dl/downloads"
-BIN_URL = "https://github.com/downloads/rg3/youtube-dl/youtube-dl"
-
-if not os.access(filename, os.W_OK):
-    sys.exit('ERROR: no write permissions on %s' % filename)
-
-try:
-    urlh = compat_urllib_request.urlopen(BIN_URL)
-    newcontent = urlh.read()
-    urlh.close()
-except (IOError, OSError) as err:
-    sys.exit('ERROR: unable to download latest version')
-
-try:
-    with open(filename, 'wb') as outf:
-        outf.write(newcontent)
-except (IOError, OSError) as err:
-    sys.exit('ERROR: unable to overwrite current version')
-
-sys.stderr.write(u'Done! Now you can run youtube-dl.\n')

View File

@@ -1,12 +0,0 @@
-from distutils.core import setup
-import py2exe
-
-py2exe_options = {
-    "bundle_files": 1,
-    "compressed": 1,
-    "optimize": 2,
-    "dist_dir": '.',
-    "dll_excludes": ['w9xpopen.exe']
-}
-
-setup(console=['youtube-dl.py'], options={ "py2exe": py2exe_options }, zipfile=None)

View File

@@ -1,102 +0,0 @@
-#!/usr/bin/env python
-
-import sys, os
-import urllib2
-import json, hashlib
-
-def rsa_verify(message, signature, key):
-    from struct import pack
-    from hashlib import sha256
-    from sys import version_info
-    def b(x):
-        if version_info[0] == 2: return x
-        else: return x.encode('latin1')
-    assert(type(message) == type(b('')))
-    block_size = 0
-    n = key[0]
-    while n:
-        block_size += 1
-        n >>= 8
-    signature = pow(int(signature, 16), key[1], key[0])
-    raw_bytes = []
-    while signature:
-        raw_bytes.insert(0, pack("B", signature & 0xFF))
-        signature >>= 8
-    signature = (block_size - len(raw_bytes)) * b('\x00') + b('').join(raw_bytes)
-    if signature[0:2] != b('\x00\x01'): return False
-    signature = signature[2:]
-    if not b('\x00') in signature: return False
-    signature = signature[signature.index(b('\x00'))+1:]
-    if not signature.startswith(b('\x30\x31\x30\x0D\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x01\x05\x00\x04\x20')): return False
-    signature = signature[19:]
-    if signature != sha256(message).digest(): return False
-    return True
-
-sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n')
-sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n')
-sys.stderr.write(u'From now on, get the binaries from http://rg3.github.com/youtube-dl/download.html, not from the git repository.\n\n')
-
-raw_input()
-
-filename = sys.argv[0]
-
-UPDATE_URL = "http://rg3.github.io/youtube-dl/update/"
-VERSION_URL = UPDATE_URL + 'LATEST_VERSION'
-JSON_URL = UPDATE_URL + 'versions.json'
-UPDATES_RSA_KEY = (0x9d60ee4d8f805312fdb15a62f87b95bd66177b91df176765d13514a0f1754bcd2057295c5b6f1d35daa6742c3ffc9a82d3e118861c207995a8031e151d863c9927e304576bc80692bc8e094896fcf11b66f3e29e04e3a71e9a11558558acea1840aec37fc396fb6b65dc81a1c4144e03bd1c011de62e3f1357b327d08426fe93, 65537)
-
-if not os.access(filename, os.W_OK):
-    sys.exit('ERROR: no write permissions on %s' % filename)
-
-exe = os.path.abspath(filename)
-directory = os.path.dirname(exe)
-if not os.access(directory, os.W_OK):
-    sys.exit('ERROR: no write permissions on %s' % directory)
-
-try:
-    versions_info = urllib2.urlopen(JSON_URL).read().decode('utf-8')
-    versions_info = json.loads(versions_info)
-except:
-    sys.exit(u'ERROR: can\'t obtain versions info. Please try again later.')
-if not 'signature' in versions_info:
-    sys.exit(u'ERROR: the versions file is not signed or corrupted. Aborting.')
-signature = versions_info['signature']
-del versions_info['signature']
-if not rsa_verify(json.dumps(versions_info, sort_keys=True), signature, UPDATES_RSA_KEY):
-    sys.exit(u'ERROR: the versions file signature is invalid. Aborting.')
-
-version = versions_info['versions'][versions_info['latest']]
-
-try:
-    urlh = urllib2.urlopen(version['exe'][0])
-    newcontent = urlh.read()
-    urlh.close()
-except (IOError, OSError) as err:
-    sys.exit('ERROR: unable to download latest version')
-
-newcontent_hash = hashlib.sha256(newcontent).hexdigest()
-if newcontent_hash != version['exe'][1]:
-    sys.exit(u'ERROR: the downloaded file hash does not match. Aborting.')
-
-try:
-    with open(exe + '.new', 'wb') as outf:
-        outf.write(newcontent)
-except (IOError, OSError) as err:
-    sys.exit(u'ERROR: unable to write the new version')
-
-try:
-    bat = os.path.join(directory, 'youtube-dl-updater.bat')
-    b = open(bat, 'w')
-    b.write("""
-echo Updating youtube-dl...
-ping 127.0.0.1 -n 5 -w 1000 > NUL
-move /Y "%s.new" "%s"
-del "%s"
-    \n""" %(exe, exe, bat))
-    b.close()
-
-    os.startfile(bat)
-except (IOError, OSError) as err:
-    sys.exit('ERROR: unable to overwrite current version')
-
-sys.stderr.write(u'Done! Now you can run youtube-dl.\n')

View File

@@ -1,4 +1,6 @@
 #!/usr/bin/env python
+from __future__ import unicode_literals
+
 import os
 from os.path import dirname as dirn
 import sys

View File

@@ -4,7 +4,6 @@
 from __future__ import print_function

 import os.path
-import pkg_resources
 import warnings
 import sys
@@ -103,7 +102,9 @@ setup(
         "Programming Language :: Python :: 2.6",
         "Programming Language :: Python :: 2.7",
         "Programming Language :: Python :: 3",
-        "Programming Language :: Python :: 3.3"
+        "Programming Language :: Python :: 3.2",
+        "Programming Language :: Python :: 3.3",
+        "Programming Language :: Python :: 3.4",
     ],

     **params

View File

@@ -72,8 +72,10 @@ class FakeYDL(YoutubeDL):
     def expect_warning(self, regex):
         # Silence an expected warning matching a regex
         old_report_warning = self.report_warning
+
         def report_warning(self, message):
-            if re.match(regex, message): return
+            if re.match(regex, message):
+                return
             old_report_warning(message)
         self.report_warning = types.MethodType(report_warning, self)

View File

@@ -266,6 +266,7 @@ class TestFormatSelection(unittest.TestCase):
             'ext': 'mp4',
             'width': None,
         }
+
         def fname(templ):
             ydl = YoutubeDL({'outtmpl': templ})
             return ydl.prepare_filename(info)

View File

@@ -32,19 +32,19 @@ class TestAllURLsMatching(unittest.TestCase):
     def test_youtube_playlist_matching(self):
         assertPlaylist = lambda url: self.assertMatch(url, ['youtube:playlist'])
         assertPlaylist('ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
-        assertPlaylist('UUBABnxM4Ar9ten8Mdjj1j0Q') #585
+        assertPlaylist('UUBABnxM4Ar9ten8Mdjj1j0Q')  # 585
         assertPlaylist('PL63F0C78739B09958')
         assertPlaylist('https://www.youtube.com/playlist?list=UUBABnxM4Ar9ten8Mdjj1j0Q')
         assertPlaylist('https://www.youtube.com/course?list=ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
         assertPlaylist('https://www.youtube.com/playlist?list=PLwP_SiAcdui0KVebT0mU9Apz359a4ubsC')
-        assertPlaylist('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012') #668
+        assertPlaylist('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012')  # 668
         self.assertFalse('youtube:playlist' in self.matching_ies('PLtS2H6bU1M'))
         # Top tracks
         assertPlaylist('https://www.youtube.com/playlist?list=MCUS.20142101')

     def test_youtube_matching(self):
         self.assertTrue(YoutubeIE.suitable('PLtS2H6bU1M'))
-        self.assertFalse(YoutubeIE.suitable('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012')) #668
+        self.assertFalse(YoutubeIE.suitable('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012'))  # 668
         self.assertMatch('http://youtu.be/BaW_jenozKc', ['youtube'])
         self.assertMatch('http://www.youtube.com/v/BaW_jenozKc', ['youtube'])
         self.assertMatch('https://youtube.googleapis.com/v/BaW_jenozKc', ['youtube'])

View File

@@ -40,18 +40,22 @@ from youtube_dl.extractor import get_info_extractor
 RETRIES = 3

+
 class YoutubeDL(youtube_dl.YoutubeDL):
     def __init__(self, *args, **kwargs):
         self.to_stderr = self.to_screen
         self.processed_info_dicts = []
         super(YoutubeDL, self).__init__(*args, **kwargs)
+
     def report_warning(self, message):
         # Don't accept warnings during tests
         raise ExtractorError(message)
+
     def process_info(self, info_dict):
         self.processed_info_dicts.append(info_dict)
         return super(YoutubeDL, self).process_info(info_dict)

+
 def _file_md5(fn):
     with open(fn, 'rb') as f:
         return hashlib.md5(f.read()).hexdigest()
@@ -61,10 +65,13 @@ defs = gettestcases()
 class TestDownload(unittest.TestCase):
     maxDiff = None
+
     def setUp(self):
         self.defs = defs

-### Dynamically generate tests
+# Dynamically generate tests
+
+
 def generator(test_case):
     def test_template(self):
@@ -90,7 +97,7 @@ def generator(test_case):
                     return
             for other_ie in other_ies:
                 if not other_ie.working():
-                    print_skipping(u'test depends on %sIE, marked as not WORKING' % other_ie.ie_key())
+                    print_skipping('test depends on %sIE, marked as not WORKING' % other_ie.ie_key())
                     return

         params = get_params(test_case.get('params', {}))
@@ -101,6 +108,7 @@ def generator(test_case):
         ydl = YoutubeDL(params, auto_init=False)
         ydl.add_default_info_extractors()
         finished_hook_called = set()
+
         def _hook(status):
             if status['status'] == 'finished':
                 finished_hook_called.add(status['filename'])
@@ -111,6 +119,7 @@ def generator(test_case):
             return tc.get('file') or ydl.prepare_filename(tc.get('info_dict', {}))
         res_dict = None
+
         def try_rm_tcs_files(tcs=None):
             if tcs is None:
                 tcs = test_cases
@@ -134,7 +143,7 @@ def generator(test_case):
                     raise
                 if try_num == RETRIES:
-                    report_warning(u'Failed due to network errors, skipping...')
+                    report_warning('Failed due to network errors, skipping...')
                     return

                 print('Retrying: {0} failed tries\n\n##########\n\n'.format(try_num))
@@ -206,7 +215,7 @@ def generator(test_case):
     return test_template

-### And add them to TestDownload
+# And add them to TestDownload
 for n, test_case in enumerate(defs):
     test_method = generator(test_case)
     tname = 'test_' + str(test_case['name'])

View File

@@ -23,6 +23,7 @@ from youtube_dl.extractor import (
 class BaseTestSubtitles(unittest.TestCase):
     url = None
     IE = None
+
     def setUp(self):
         self.DL = FakeYDL()
         self.ie = self.IE(self.DL)

View File

@@ -9,14 +9,13 @@ rootDir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
 IGNORED_FILES = [
     'setup.py', # http://bugs.python.org/issue13943
+    'conf.py',
+    'buildserver.py',
 ]


 class TestUnicodeLiterals(unittest.TestCase):
     def test_all_files(self):
-        print('Skipping this test (not yet fully implemented)')
-        return
-
         for dirpath, _, filenames in os.walk(rootDir):
             for basename in filenames:
                 if not basename.endswith('.py'):
@@ -30,10 +29,10 @@ class TestUnicodeLiterals(unittest.TestCase):
             if "'" not in code and '"' not in code:
                 continue

-            imps = 'from __future__ import unicode_literals'
-            self.assertTrue(
-                imps in code,
-                ' %s missing in %s' % (imps, fn))
+            self.assertRegexpMatches(
+                code,
+                r'(?:#.*\n*)?from __future__ import (?:[a-z_]+,\s*)*unicode_literals',
+                'unicode_literals import missing in %s' % fn)

             m = re.search(r'(?<=\s)u[\'"](?!\)|,|$)', code)
             if m is not None:
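A quick illustration (not part of the test suite) of what the armed assertion accepts: assertRegexpMatches uses re.search, so the pattern tolerates leading comment lines and combined __future__ imports, while files lacking the import fail.

    import re

    pattern = r'(?:#.*\n*)?from __future__ import (?:[a-z_]+,\s*)*unicode_literals'

    for code in (
        "from __future__ import unicode_literals",
        "#!/usr/bin/env python\nfrom __future__ import unicode_literals",
        "from __future__ import with_statement, unicode_literals",
    ):
        assert re.search(pattern, code)
    assert re.search(pattern, "import os") is None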

View File

@@ -45,9 +45,9 @@ from youtube_dl.utils import (
     escape_rfc3986,
     escape_url,
     js_to_json,
-    get_filesystem_encoding,
     intlist_to_bytes,
     args_to_str,
+    parse_filesize,
 )
@@ -120,7 +120,7 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(orderedSet([1, 1, 2, 3, 4, 4, 5, 6, 7, 3, 5]), [1, 2, 3, 4, 5, 6, 7])
         self.assertEqual(orderedSet([]), [])
         self.assertEqual(orderedSet([1]), [1])
-        #keep the list ordered
+        # keep the list ordered
         self.assertEqual(orderedSet([135, 1, 1, 1]), [135, 1])

     def test_unescape_html(self):
@@ -129,7 +129,7 @@ class TestUtil(unittest.TestCase):
             unescapeHTML('&eacute;'), 'é')

     def test_daterange(self):
-        _20century = DateRange("19000101","20000101")
+        _20century = DateRange("19000101", "20000101")
         self.assertFalse("17890714" in _20century)
         _ac = DateRange("00010101")
         self.assertTrue("19690721" in _ac)
@@ -171,7 +171,7 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(find('media:song/url').text, 'http://server.com/download.mp3')

     def test_smuggle_url(self):
-        data = {u"ö": u"ö", u"abc": [3]}
+        data = {"ö": "ö", "abc": [3]}
         url = 'https://foo.bar/baz?x=y#a'
         smug_url = smuggle_url(url, data)
         unsmug_url, unsmug_data = unsmuggle_url(smug_url)
@@ -368,5 +368,14 @@ class TestUtil(unittest.TestCase):
             'foo ba/r -baz \'2 be\' \'\''
         )

+    def test_parse_filesize(self):
+        self.assertEqual(parse_filesize(None), None)
+        self.assertEqual(parse_filesize(''), None)
+        self.assertEqual(parse_filesize('91 B'), 91)
+        self.assertEqual(parse_filesize('foobar'), None)
+        self.assertEqual(parse_filesize('2 MiB'), 2097152)
+        self.assertEqual(parse_filesize('5 GB'), 5000000000)
+        self.assertEqual(parse_filesize('1.2Tb'), 1200000000000)
+
 if __name__ == '__main__':
     unittest.main()
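The new test pins down parse_filesize's semantics: binary suffixes ('MiB') scale by powers of 1024, decimal ones ('GB', 'Tb') by powers of 1000, and unparseable input yields None. A minimal sketch that satisfies these cases; the real helper lives in youtube_dl/utils.py, and this body is illustrative, not its actual implementation:

    import re

    _UNITS = {
        'b': 1,
        'kib': 1024, 'mib': 1024 ** 2, 'gib': 1024 ** 3, 'tib': 1024 ** 4,
        'kb': 1000, 'mb': 1000 ** 2, 'gb': 1000 ** 3, 'tb': 1000 ** 4,
    }

    def parse_filesize(s):
        if not s:
            return None
        m = re.match(r'\s*(\d+(?:\.\d+)?)\s*([a-zA-Z]+)\s*$', s)
        if m is None:
            return None
        mult = _UNITS.get(m.group(2).lower())  # case-insensitive: 'Tb' == 'tb'
        if mult is None:
            return None
        return int(float(m.group(1)) * mult)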

View File

@@ -1,5 +1,6 @@
 #!/usr/bin/env python
 # coding: utf-8
+from __future__ import unicode_literals

 # Allow direct execution
 import os
@@ -31,19 +32,18 @@ params = get_params({
 })

-
 TEST_ID = 'gr51aVj-mLg'
 ANNOTATIONS_FILE = TEST_ID + '.flv.annotations.xml'
 EXPECTED_ANNOTATIONS = ['Speech bubble', 'Note', 'Title', 'Spotlight', 'Label']

+
 class TestAnnotations(unittest.TestCase):
     def setUp(self):
         # Clear old files
         self.tearDown()

     def test_info_json(self):
-        expected = list(EXPECTED_ANNOTATIONS) #Two annotations could have the same text.
+        expected = list(EXPECTED_ANNOTATIONS)  # Two annotations could have the same text.
         ie = youtube_dl.extractor.YoutubeIE()
         ydl = YoutubeDL(params)
         ydl.add_info_extractor(ie)
@@ -59,19 +59,18 @@ class TestAnnotations(unittest.TestCase):
         self.assertEqual(annotationsTag.tag, 'annotations')
         annotations = annotationsTag.findall('annotation')

-        #Not all the annotations have TEXT children and the annotations are returned unsorted.
+        # Not all the annotations have TEXT children and the annotations are returned unsorted.
         for a in annotations:
             self.assertEqual(a.tag, 'annotation')
             if a.get('type') == 'text':
                 textTag = a.find('TEXT')
                 text = textTag.text
-                self.assertTrue(text in expected) #assertIn only added in python 2.7
-                #remove the first occurance, there could be more than one annotation with the same text
+                self.assertTrue(text in expected)  # assertIn only added in python 2.7
+                # remove the first occurance, there could be more than one annotation with the same text
                 expected.remove(text)
-        #We should have seen (and removed) all the expected annotation texts.
+        # We should have seen (and removed) all the expected annotation texts.
         self.assertEqual(len(expected), 0, 'Not all expected annotations were found.')

     def tearDown(self):
         try_rm(ANNOTATIONS_FILE)

View File

@@ -1,5 +1,6 @@
 #!/usr/bin/env python
 # coding: utf-8
+from __future__ import unicode_literals

 # Allow direct execution
 import os
@@ -32,7 +33,7 @@ params = get_params({
 TEST_ID = 'BaW_jenozKc'
 INFO_JSON_FILE = TEST_ID + '.info.json'
 DESCRIPTION_FILE = TEST_ID + '.mp4.description'
-EXPECTED_DESCRIPTION = u'''test chars: "'/\ä↭𝕐
+EXPECTED_DESCRIPTION = '''test chars: "'/\ä↭𝕐
 test URL: https://github.com/rg3/youtube-dl/issues/1892

 This is a test video for youtube-dl.
@@ -53,11 +54,11 @@ class TestInfoJSON(unittest.TestCase):
         self.assertTrue(os.path.exists(INFO_JSON_FILE))
         with io.open(INFO_JSON_FILE, 'r', encoding='utf-8') as jsonf:
             jd = json.load(jsonf)
-        self.assertEqual(jd['upload_date'], u'20121002')
+        self.assertEqual(jd['upload_date'], '20121002')
         self.assertEqual(jd['description'], EXPECTED_DESCRIPTION)
         self.assertEqual(jd['id'], TEST_ID)
         self.assertEqual(jd['extractor'], 'youtube')
-        self.assertEqual(jd['title'], u'''youtube-dl test video "'/\ä↭𝕐''')
+        self.assertEqual(jd['title'], '''youtube-dl test video "'/\ä↭𝕐''')
         self.assertEqual(jd['uploader'], 'Philipp Hagemeister')

         self.assertTrue(os.path.exists(DESCRIPTION_FILE))

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env python
+from __future__ import unicode_literals

 # Allow direct execution
 import os
@@ -12,10 +13,6 @@ from test.helper import FakeYDL
 from youtube_dl.extractor import (
     YoutubePlaylistIE,
     YoutubeIE,
-    YoutubeChannelIE,
-    YoutubeShowIE,
-    YoutubeTopListIE,
-    YoutubeSearchURLIE,
 )

View File

@@ -29,7 +29,6 @@ from .compat import (
     compat_str,
     compat_urllib_error,
     compat_urllib_request,
-    shlex_quote,
 )
 from .utils import (
     escape_url,
@@ -700,14 +699,17 @@ class YoutubeDL(object):
                 self.report_warning(
                     'Extractor %s returned a compat_list result. '
                     'It needs to be updated.' % ie_result.get('extractor'))
+
                 def _fixup(r):
-                    self.add_extra_info(r,
+                    self.add_extra_info(
+                        r,
                         {
                             'extractor': ie_result['extractor'],
                             'webpage_url': ie_result['webpage_url'],
                             'webpage_url_basename': url_basename(ie_result['webpage_url']),
                             'extractor_key': ie_result['extractor_key'],
-                        })
+                        }
+                    )
                     return r

                 ie_result['entries'] = [
                     self.process_ie_result(_fixup(r), download, extra_info)
@@ -1111,7 +1113,7 @@ class YoutubeDL(object):
         for url in url_list:
             try:
-                #It also downloads the videos
+                # It also downloads the videos
                 res = self.extract_info(url)
             except UnavailableVideoError:
                 self.report_error('unable to download video')
@@ -1428,4 +1430,3 @@ class YoutubeDL(object):
         if encoding is None:
             encoding = preferredencoding()
         return encoding
-

View File

@@ -76,10 +76,10 @@ def _real_main(argv=None):
     if opts.headers is not None:
         for h in opts.headers:
             if h.find(':', 1) < 0:
-                parser.error('wrong header formatting, it should be key:value, not "%s"'%h)
+                parser.error('wrong header formatting, it should be key:value, not "%s"' % h)
             key, value = h.split(':', 2)
             if opts.verbose:
-                write_string('[debug] Adding header from command line option %s:%s\n'%(key, value))
+                write_string('[debug] Adding header from command line option %s:%s\n' % (key, value))
             std_headers[key] = value

     # Dump user agent
@@ -128,7 +128,6 @@ def _real_main(argv=None):
             compat_print(desc)
         sys.exit(0)

-
     # Conflicting, missing and erroneous options
     if opts.usenetrc and (opts.username is not None or opts.password is not None):
         parser.error('using .netrc conflicts with giving username/password')
@@ -190,14 +189,14 @@ def _real_main(argv=None):
     # --all-sub automatically sets --write-sub if --write-auto-sub is not given
     # this was the old behaviour if only --all-sub was given.
-    if opts.allsubtitles and (opts.writeautomaticsub == False):
+    if opts.allsubtitles and not opts.writeautomaticsub:
         opts.writesubtitles = True

     if sys.version_info < (3,):
         # In Python 2, sys.argv is a bytestring (also note http://bugs.python.org/issue2128 for Windows systems)
         if opts.outtmpl is not None:
             opts.outtmpl = opts.outtmpl.decode(preferredencoding())
-    outtmpl =((opts.outtmpl is not None and opts.outtmpl)
+    outtmpl = ((opts.outtmpl is not None and opts.outtmpl)
               or (opts.format == '-1' and opts.usetitle and '%(title)s-%(id)s-%(format)s.%(ext)s')
               or (opts.format == '-1' and '%(id)s-%(format)s.%(ext)s')
               or (opts.usetitle and opts.autonumber and '%(autonumber)s-%(title)s-%(id)s.%(ext)s')
@@ -317,7 +316,6 @@ def _real_main(argv=None):
             ydl.add_post_processor(FFmpegAudioFixPP())
         ydl.add_post_processor(AtomicParsleyPP())

-
         # Please keep ExecAfterDownload towards the bottom as it allows the user to modify the final file in any way.
         # So if the user is able to remove the file before your postprocessor runs it might cause a few problems.
         if opts.exec_cmd:

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env python
+from __future__ import unicode_literals

 # Execute with
 # $ python youtube_dl/__main__.py (2.6+)

View File

@@ -1,3 +1,5 @@
+from __future__ import unicode_literals
+
 __all__ = ['aes_encrypt', 'key_expansion', 'aes_ctr_decrypt', 'aes_cbc_decrypt', 'aes_decrypt_text']

 import base64
@@ -7,6 +9,7 @@ from .utils import bytes_to_intlist, intlist_to_bytes
 BLOCK_SIZE_BYTES = 16

+
 def aes_ctr_decrypt(data, key, counter):
     """
     Decrypt with aes in counter mode
@@ -20,11 +23,11 @@ def aes_ctr_decrypt(data, key, counter):
     expanded_key = key_expansion(key)
     block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))

-    decrypted_data=[]
+    decrypted_data = []
     for i in range(block_count):
         counter_block = counter.next_value()
-        block = data[i*BLOCK_SIZE_BYTES : (i+1)*BLOCK_SIZE_BYTES]
-        block += [0]*(BLOCK_SIZE_BYTES - len(block))
+        block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
+        block += [0] * (BLOCK_SIZE_BYTES - len(block))

         cipher_counter_block = aes_encrypt(counter_block, expanded_key)
         decrypted_data += xor(block, cipher_counter_block)
@@ -32,6 +35,7 @@ def aes_ctr_decrypt(data, key, counter):

     return decrypted_data

+
 def aes_cbc_decrypt(data, key, iv):
     """
     Decrypt with aes in CBC mode
@@ -44,11 +48,11 @@ def aes_cbc_decrypt(data, key, iv):
     expanded_key = key_expansion(key)
     block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))

-    decrypted_data=[]
+    decrypted_data = []
     previous_cipher_block = iv
     for i in range(block_count):
-        block = data[i*BLOCK_SIZE_BYTES : (i+1)*BLOCK_SIZE_BYTES]
-        block += [0]*(BLOCK_SIZE_BYTES - len(block))
+        block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
+        block += [0] * (BLOCK_SIZE_BYTES - len(block))

         decrypted_block = aes_decrypt(block, expanded_key)
         decrypted_data += xor(decrypted_block, previous_cipher_block)
@@ -57,6 +61,7 @@ def aes_cbc_decrypt(data, key, iv):

     return decrypted_data

+
 def key_expansion(data):
     """
     Generate key schedule
@@ -73,24 +78,25 @@ def key_expansion(data):
         temp = data[-4:]
         temp = key_schedule_core(temp, rcon_iteration)
         rcon_iteration += 1
-        data += xor(temp, data[-key_size_bytes : 4-key_size_bytes])
+        data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])

         for _ in range(3):
             temp = data[-4:]
-            data += xor(temp, data[-key_size_bytes : 4-key_size_bytes])
+            data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])

         if key_size_bytes == 32:
             temp = data[-4:]
             temp = sub_bytes(temp)
-            data += xor(temp, data[-key_size_bytes : 4-key_size_bytes])
+            data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])

         for _ in range(3 if key_size_bytes == 32 else 2 if key_size_bytes == 24 else 0):
             temp = data[-4:]
-            data += xor(temp, data[-key_size_bytes : 4-key_size_bytes])
+            data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
     data = data[:expanded_key_size_bytes]

     return data

+
 def aes_encrypt(data, expanded_key):
     """
     Encrypt one block with aes
@@ -102,15 +108,16 @@ def aes_encrypt(data, expanded_key):
     rounds = len(expanded_key) // BLOCK_SIZE_BYTES - 1

     data = xor(data, expanded_key[:BLOCK_SIZE_BYTES])
-    for i in range(1, rounds+1):
+    for i in range(1, rounds + 1):
         data = sub_bytes(data)
         data = shift_rows(data)
         if i != rounds:
             data = mix_columns(data)
-        data = xor(data, expanded_key[i*BLOCK_SIZE_BYTES : (i+1)*BLOCK_SIZE_BYTES])
+        data = xor(data, expanded_key[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES])

     return data

+
 def aes_decrypt(data, expanded_key):
     """
     Decrypt one block with aes
@@ -122,7 +129,7 @@ def aes_decrypt(data, expanded_key):
     rounds = len(expanded_key) // BLOCK_SIZE_BYTES - 1

     for i in range(rounds, 0, -1):
-        data = xor(data, expanded_key[i*BLOCK_SIZE_BYTES : (i+1)*BLOCK_SIZE_BYTES])
+        data = xor(data, expanded_key[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES])
         if i != rounds:
             data = mix_columns_inv(data)
         data = shift_rows_inv(data)
@@ -131,6 +138,7 @@ def aes_decrypt(data, expanded_key):

     return data

+
 def aes_decrypt_text(data, password, key_size_bytes):
     """
     Decrypt text
@@ -149,14 +157,15 @@ def aes_decrypt_text(data, password, key_size_bytes):
     data = bytes_to_intlist(base64.b64decode(data))
     password = bytes_to_intlist(password.encode('utf-8'))

-    key = password[:key_size_bytes] + [0]*(key_size_bytes - len(password))
+    key = password[:key_size_bytes] + [0] * (key_size_bytes - len(password))
     key = aes_encrypt(key[:BLOCK_SIZE_BYTES], key_expansion(key)) * (key_size_bytes // BLOCK_SIZE_BYTES)

     nonce = data[:NONCE_LENGTH_BYTES]
     cipher = data[NONCE_LENGTH_BYTES:]

     class Counter:
-        __value = nonce + [0]*(BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES)
+        __value = nonce + [0] * (BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES)
+
         def next_value(self):
             temp = self.__value
             self.__value = inc(self.__value)
@@ -200,14 +209,14 @@ SBOX_INV = (0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, 0xbf, 0x40, 0xa3, 0x
             0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef,
             0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0, 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61,
             0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d)
-MIX_COLUMN_MATRIX = ((0x2,0x3,0x1,0x1),
-                     (0x1,0x2,0x3,0x1),
-                     (0x1,0x1,0x2,0x3),
-                     (0x3,0x1,0x1,0x2))
-MIX_COLUMN_MATRIX_INV = ((0xE,0xB,0xD,0x9),
-                         (0x9,0xE,0xB,0xD),
-                         (0xD,0x9,0xE,0xB),
-                         (0xB,0xD,0x9,0xE))
+MIX_COLUMN_MATRIX = ((0x2, 0x3, 0x1, 0x1),
+                     (0x1, 0x2, 0x3, 0x1),
+                     (0x1, 0x1, 0x2, 0x3),
+                     (0x3, 0x1, 0x1, 0x2))
+MIX_COLUMN_MATRIX_INV = ((0xE, 0xB, 0xD, 0x9),
+                         (0x9, 0xE, 0xB, 0xD),
+                         (0xD, 0x9, 0xE, 0xB),
+                         (0xB, 0xD, 0x9, 0xE))
 RIJNDAEL_EXP_TABLE = (0x01, 0x03, 0x05, 0x0F, 0x11, 0x33, 0x55, 0xFF, 0x1A, 0x2E, 0x72, 0x96, 0xA1, 0xF8, 0x13, 0x35,
                       0x5F, 0xE1, 0x38, 0x48, 0xD8, 0x73, 0x95, 0xA4, 0xF7, 0x02, 0x06, 0x0A, 0x1E, 0x22, 0x66, 0xAA,
                       0xE5, 0x34, 0x5C, 0xE4, 0x37, 0x59, 0xEB, 0x26, 0x6A, 0xBE, 0xD9, 0x70, 0x90, 0xAB, 0xE6, 0x31,
@@ -241,15 +250,19 @@ RIJNDAEL_LOG_TABLE = (0x00, 0x00, 0x19, 0x01, 0x32, 0x02, 0x1a, 0xc6, 0x4b, 0xc7
                       0x44, 0x11, 0x92, 0xd9, 0x23, 0x20, 0x2e, 0x89, 0xb4, 0x7c, 0xb8, 0x26, 0x77, 0x99, 0xe3, 0xa5,
                       0x67, 0x4a, 0xed, 0xde, 0xc5, 0x31, 0xfe, 0x18, 0x0d, 0x63, 0x8c, 0x80, 0xc0, 0xf7, 0x70, 0x07)

+
 def sub_bytes(data):
     return [SBOX[x] for x in data]

+
 def sub_bytes_inv(data):
     return [SBOX_INV[x] for x in data]

+
 def rotate(data):
     return data[1:] + [data[0]]

+
 def key_schedule_core(data, rcon_iteration):
     data = rotate(data)
     data = sub_bytes(data)
@@ -257,14 +270,17 @@ def key_schedule_core(data, rcon_iteration):
     return data

+
 def xor(data1, data2):
-    return [x^y for x, y in zip(data1, data2)]
+    return [x ^ y for x, y in zip(data1, data2)]

+
 def rijndael_mul(a, b):
-    if(a==0 or b==0):
+    if(a == 0 or b == 0):
         return 0
     return RIJNDAEL_EXP_TABLE[(RIJNDAEL_LOG_TABLE[a] + RIJNDAEL_LOG_TABLE[b]) % 0xFF]

+
 def mix_column(data, matrix):
     data_mixed = []
     for row in range(4):
@@ -275,33 +291,38 @@ def mix_column(data, matrix):
         data_mixed.append(mixed)
     return data_mixed

+
 def mix_columns(data, matrix=MIX_COLUMN_MATRIX):
     data_mixed = []
     for i in range(4):
-        column = data[i*4 : (i+1)*4]
+        column = data[i * 4: (i + 1) * 4]
         data_mixed += mix_column(column, matrix)
     return data_mixed

+
 def mix_columns_inv(data):
     return mix_columns(data, MIX_COLUMN_MATRIX_INV)

+
 def shift_rows(data):
     data_shifted = []
     for column in range(4):
         for row in range(4):
-            data_shifted.append( data[((column + row) & 0b11) * 4 + row] )
+            data_shifted.append(data[((column + row) & 0b11) * 4 + row])
     return data_shifted

+
 def shift_rows_inv(data):
     data_shifted = []
     for column in range(4):
         for row in range(4):
-            data_shifted.append( data[((column - row) & 0b11) * 4 + row] )
+            data_shifted.append(data[((column - row) & 0b11) * 4 + row])
     return data_shifted

+
 def inc(data):
-    data = data[:] # copy
-    for i in range(len(data)-1,-1,-1):
+    data = data[:]  # copy
+    for i in range(len(data) - 1, -1, -1):
         if data[i] == 255:
             data[i] = 0
         else:
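The hunk above is cut off inside inc, the helper that Counter uses to step the CTR-mode counter block. Its behavior is a big-endian increment with carry over a list of byte values; a runnable sketch with the truncated else branch filled in under that assumption:

    def inc(data):
        data = data[:]  # copy
        for i in range(len(data) - 1, -1, -1):
            if data[i] == 255:
                data[i] = 0  # carry continues into the next byte up
            else:
                data[i] = data[i] + 1  # assumed body of the truncated branch
                break
        return data

    assert inc([0, 255, 255]) == [1, 0, 0]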

View File

@@ -182,8 +182,10 @@ except ImportError: # Python < 3.3
     def compat_ord(c):
-        if type(c) is int: return c
-        else: return ord(c)
+        if type(c) is int:
+            return c
+        else:
+            return ord(c)


 if sys.version_info >= (3, 0):
@@ -254,7 +256,7 @@ else:
                 drive = ''
             userhome = os.path.join(drive, compat_getenv('HOMEPATH'))

-            if i != 1: #~user
+            if i != 1:  # ~user
                 userhome = os.path.join(os.path.dirname(userhome), path[1:i])

             return userhome + path[i:]
@@ -268,7 +270,7 @@ if sys.version_info < (3, 0):
         print(s.encode(preferredencoding(), 'xmlcharrefreplace'))
 else:
     def compat_print(s):
-        assert type(s) == type(u'')
+        assert isinstance(s, compat_str)
         print(s)

View File

@@ -30,3 +30,8 @@ def get_suitable_downloader(info_dict):
         return F4mFD
     else:
         return HttpFD
+
+
+__all__ = [
+    'get_suitable_downloader',
+    'FileDownloader',
+]

View File

@@ -55,7 +55,7 @@ class FlvReader(io.BytesIO):
         if size == 1:
             real_size = self.read_unsigned_long_long()
             header_end = 16
-        return real_size, box_type, self.read(real_size-header_end)
+        return real_size, box_type, self.read(real_size - header_end)

     def read_asrt(self):
         # version
@@ -180,7 +180,7 @@ def build_fragments_list(boot_info):
     n_frags = segment_run_entry[1]
     fragment_run_entry_table = boot_info['fragments'][0]['fragments']
     first_frag_number = fragment_run_entry_table[0]['first']
-    for (i, frag_number) in zip(range(1, n_frags+1), itertools.count(first_frag_number)):
+    for (i, frag_number) in zip(range(1, n_frags + 1), itertools.count(first_frag_number)):
         res.append((1, frag_number))
     return res
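The zip(range(...), itertools.count(...)) idiom in build_fragments_list pairs a bounded loop counter with an unbounded fragment number that starts at an arbitrary offset. A small illustration with made-up numbers:

    import itertools

    n_frags = 4
    first_frag_number = 17

    res = []
    for (i, frag_number) in zip(range(1, n_frags + 1), itertools.count(first_frag_number)):
        res.append((1, frag_number))

    # zip() stops at the shorter iterable, so count() is capped at n_frags items.
    assert res == [(1, 17), (1, 18), (1, 19), (1, 20)]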
@@ -225,13 +225,15 @@ class F4mFD(FileDownloader):
         self.to_screen('[download] Downloading f4m manifest')
         manifest = self.ydl.urlopen(man_url).read()
         self.report_destination(filename)
-        http_dl = HttpQuietDownloader(self.ydl,
+        http_dl = HttpQuietDownloader(
+            self.ydl,
             {
                 'continuedl': True,
                 'quiet': True,
                 'noprogress': True,
                 'test': self.params.get('test', False),
-            })
+            }
+        )

         doc = etree.fromstring(manifest)
         formats = [(int(f.attrib.get('bitrate', -1)), f) for f in doc.findall(_add_ns('media'))]

View File

@@ -28,14 +28,14 @@ class HlsFD(FileDownloader):
             if check_executable(program, ['-version']):
                 break
         else:
-            self.report_error(u'm3u8 download detected but ffmpeg or avconv could not be found. Please install one.')
+            self.report_error('m3u8 download detected but ffmpeg or avconv could not be found. Please install one.')
             return False
         cmd = [program] + args

         retval = subprocess.call(cmd)
         if retval == 0:
             fsize = os.path.getsize(encodeFilename(tmpfilename))
-            self.to_screen(u'\r[%s] %s bytes' % (cmd[0], fsize))
+            self.to_screen('\r[%s] %s bytes' % (cmd[0], fsize))
             self.try_rename(tmpfilename, filename)
             self._hook_progress({
                 'downloaded_bytes': fsize,
@@ -45,8 +45,8 @@ class HlsFD(FileDownloader):
             })
             return True
         else:
-            self.to_stderr(u"\n")
-            self.report_error(u'%s exited with code %d' % (program, retval))
+            self.to_stderr('\n')
+            self.report_error('%s exited with code %d' % (program, retval))
             return False
@@ -101,4 +101,3 @@ class NativeHlsFD(FileDownloader):
             })
         self.try_rename(tmpfilename, filename)
         return True
-
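The ffmpeg/avconv probe above leans on Python's for/else: the else branch runs only if the loop finished without hitting break. A standalone sketch of the same pattern, using the standard library's shutil.which in place of youtube-dl's check_executable helper:

    import shutil

    for program in ('avconv', 'ffmpeg'):
        if shutil.which(program):
            break  # found a usable binary; `program` keeps its value
    else:
        # Only reached when neither candidate was found.
        raise RuntimeError('ffmpeg or avconv is required but was not found')

    print('using %s' % program)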

View File

@@ -1,3 +1,5 @@
+from __future__ import unicode_literals
+
 import os
 import time
@@ -106,7 +108,7 @@ class HttpFD(FileDownloader):
                 self.report_retry(count, retries)

         if count > retries:
-            self.report_error(u'giving up after %s retries' % retries)
+            self.report_error('giving up after %s retries' % retries)
             return False

         data_len = data.info().get('Content-length', None)
@@ -124,10 +126,10 @@ class HttpFD(FileDownloader):
             min_data_len = self.params.get("min_filesize", None)
             max_data_len = self.params.get("max_filesize", None)
             if min_data_len is not None and data_len < min_data_len:
-                self.to_screen(u'\r[download] File is smaller than min-filesize (%s bytes < %s bytes). Aborting.' % (data_len, min_data_len))
+                self.to_screen('\r[download] File is smaller than min-filesize (%s bytes < %s bytes). Aborting.' % (data_len, min_data_len))
                 return False
             if max_data_len is not None and data_len > max_data_len:
-                self.to_screen(u'\r[download] File is larger than max-filesize (%s bytes > %s bytes). Aborting.' % (data_len, max_data_len))
+                self.to_screen('\r[download] File is larger than max-filesize (%s bytes > %s bytes). Aborting.' % (data_len, max_data_len))
                 return False

         data_len_str = format_bytes(data_len)
@@ -151,13 +153,13 @@ class HttpFD(FileDownloader):
                     filename = self.undo_temp_name(tmpfilename)
                     self.report_destination(filename)
                 except (OSError, IOError) as err:
-                    self.report_error(u'unable to open for writing: %s' % str(err))
+                    self.report_error('unable to open for writing: %s' % str(err))
                     return False
             try:
                 stream.write(data_block)
             except (IOError, OSError) as err:
-                self.to_stderr(u"\n")
-                self.report_error(u'unable to write data: %s' % str(err))
+                self.to_stderr('\n')
+                self.report_error('unable to write data: %s' % str(err))
                 return False
             if not self.params.get('noresizebuffer', False):
                 block_size = self.best_block_size(after - before, len(data_block))
@@ -188,10 +190,10 @@ class HttpFD(FileDownloader):
             self.slow_down(start, byte_counter - resume_len)

         if stream is None:
-            self.to_stderr(u"\n")
-            self.report_error(u'Did not get any data blocks')
+            self.to_stderr('\n')
+            self.report_error('Did not get any data blocks')
             return False
-        if tmpfilename != u'-':
+        if tmpfilename != '-':
             stream.close()
         self.report_finish(data_len_str, (time.time() - start))
         if data_len is not None and byte_counter != data_len:
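The min/max filesize guard runs before any data is written, so an out-of-range file costs one HEAD-ish request rather than a full download. A tiny standalone version of just the comparison (byte values fabricated for the example):

    def check_filesize(data_len, params):
        min_data_len = params.get('min_filesize')
        max_data_len = params.get('max_filesize')
        if min_data_len is not None and data_len < min_data_len:
            return False  # smaller than --min-filesize, abort
        if max_data_len is not None and data_len > max_data_len:
            return False  # larger than --max-filesize, abort
        return True

    assert check_filesize(5 * 1024 ** 2, {'min_filesize': 1024 ** 2}) is True
    assert check_filesize(5 * 1024 ** 2, {'max_filesize': 1024 ** 2}) is False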

View File

@@ -1,7 +1,10 @@
+from __future__ import unicode_literals
+
 import os
 import subprocess

 from .common import FileDownloader
+from ..compat import compat_subprocess_get_DEVNULL
 from ..utils import (
     encodeFilename,
 )
@@ -13,19 +16,23 @@ class MplayerFD(FileDownloader):
         self.report_destination(filename)
         tmpfilename = self.temp_name(filename)

-        args = ['mplayer', '-really-quiet', '-vo', 'null', '-vc', 'dummy', '-dumpstream', '-dumpfile', tmpfilename, url]
+        args = [
+            'mplayer', '-really-quiet', '-vo', 'null', '-vc', 'dummy',
+            '-dumpstream', '-dumpfile', tmpfilename, url]
         # Check for mplayer first
         try:
-            subprocess.call(['mplayer', '-h'], stdout=(open(os.path.devnull, 'w')), stderr=subprocess.STDOUT)
+            subprocess.call(
+                ['mplayer', '-h'],
+                stdout=compat_subprocess_get_DEVNULL(), stderr=subprocess.STDOUT)
         except (OSError, IOError):
-            self.report_error(u'MMS or RTSP download detected but "%s" could not be run' % args[0])
+            self.report_error('MMS or RTSP download detected but "%s" could not be run' % args[0])
             return False

         # Download using mplayer.
         retval = subprocess.call(args)
         if retval == 0:
             fsize = os.path.getsize(encodeFilename(tmpfilename))
-            self.to_screen(u'\r[%s] %s bytes' % (args[0], fsize))
+            self.to_screen('\r[%s] %s bytes' % (args[0], fsize))
             self.try_rename(tmpfilename, filename)
             self._hook_progress({
                 'downloaded_bytes': fsize,
@@ -35,6 +42,6 @@ class MplayerFD(FileDownloader):
             })
             return True
         else:
-            self.to_stderr(u"\n")
-            self.report_error(u'mplayer exited with code %d' % retval)
+            self.to_stderr('\n')
+            self.report_error('mplayer exited with code %d' % retval)
             return False

View File

@@ -46,13 +46,13 @@ class RtmpFD(FileDownloader):
                     continue
                 mobj = re.search(r'([0-9]+\.[0-9]{3}) kB / [0-9]+\.[0-9]{2} sec \(([0-9]{1,2}\.[0-9])%\)', line)
                 if mobj:
-                    downloaded_data_len = int(float(mobj.group(1))*1024)
+                    downloaded_data_len = int(float(mobj.group(1)) * 1024)
                     percent = float(mobj.group(2))
                     if not resume_percent:
                         resume_percent = percent
                         resume_downloaded_data_len = downloaded_data_len
-                    eta = self.calc_eta(start, time.time(), 100-resume_percent, percent-resume_percent)
-                    speed = self.calc_speed(start, time.time(), downloaded_data_len-resume_downloaded_data_len)
+                    eta = self.calc_eta(start, time.time(), 100 - resume_percent, percent - resume_percent)
+                    speed = self.calc_speed(start, time.time(), downloaded_data_len - resume_downloaded_data_len)
                     data_len = None
                     if percent > 0:
                         data_len = int(downloaded_data_len * 100 / percent)
@@ -72,7 +72,7 @@ class RtmpFD(FileDownloader):
                     # no percent for live streams
                     mobj = re.search(r'([0-9]+\.[0-9]{3}) kB / [0-9]+\.[0-9]{2} sec', line)
                     if mobj:
-                        downloaded_data_len = int(float(mobj.group(1))*1024)
+                        downloaded_data_len = int(float(mobj.group(1)) * 1024)
                         time_now = time.time()
                         speed = self.calc_speed(start, time_now, downloaded_data_len)
                         self.report_progress_live_stream(downloaded_data_len, speed, time_now - start)
@@ -88,7 +88,7 @@ class RtmpFD(FileDownloader):
                 if not cursor_in_new_line:
                     self.to_screen('')
                     cursor_in_new_line = True
-                self.to_screen('[rtmpdump] '+line)
+                self.to_screen('[rtmpdump] ' + line)
             proc.wait()
         if not cursor_in_new_line:
             self.to_screen('')
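The progress regex above scrapes rtmpdump's stderr. A standalone check against a status line in the format rtmpdump prints (the sample values are fabricated):

    import re

    line = '3587.323 kB / 21.91 sec (83.4%)'
    mobj = re.search(
        r'([0-9]+\.[0-9]{3}) kB / [0-9]+\.[0-9]{2} sec \(([0-9]{1,2}\.[0-9])%\)',
        line)

    downloaded_data_len = int(float(mobj.group(1)) * 1024)  # kB -> bytes
    percent = float(mobj.group(2))
    assert downloaded_data_len == 3673418
    assert percent == 83.4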

View File

@@ -1,3 +1,5 @@
+from __future__ import unicode_literals
+
 from .abc import ABCIE
 from .academicearth import AcademicEarthCourseIE
 from .addanime import AddAnimeIE
@@ -32,6 +34,7 @@ from .bilibili import BiliBiliIE
 from .blinkx import BlinkxIE
 from .bliptv import BlipTVIE, BlipTVUserIE
 from .bloomberg import BloombergIE
+from .bpb import BpbIE
 from .br import BRIE
 from .breakcom import BreakIE
 from .brightcove import BrightcoveIE
@@ -372,6 +375,7 @@ from .syfy import SyfyIE
 from .sztvhu import SztvHuIE
 from .tagesschau import TagesschauIE
 from .tapely import TapelyIE
+from .tass import TassIE
 from .teachertube import (
     TeacherTubeIE,
     TeacherTubeUserIE,
@@ -392,6 +396,7 @@ from .thesixtyone import TheSixtyOneIE
 from .thisav import ThisAVIE
 from .tinypic import TinyPicIE
 from .tlc import TlcIE, TlcDeIE
+from .tmz import TMZIE
 from .tnaflix import TNAFlixIE
 from .thvideo import (
     THVideoIE,
@@ -405,6 +410,7 @@ from .trutube import TruTubeIE
 from .tube8 import Tube8IE
 from .tudou import TudouIE
 from .tumblr import TumblrIE
+from .tunein import TuneInIE
 from .turbo import TurboIE
 from .tutv import TutvIE
 from .tvigle import TvigleIE
@@ -481,6 +487,7 @@ from .wrzuta import WrzutaIE
 from .xbef import XBefIE
 from .xboxclips import XboxClipsIE
 from .xhamster import XHamsterIE
+from .xminus import XMinusIE
 from .xnxx import XNXXIE
 from .xvideos import XVideosIE
 from .xtube import XTubeUserIE, XTubeIE
@@ -511,6 +518,10 @@ from .youtube import (
     YoutubeWatchLaterIE,
 )
 from .zdf import ZDFIE
+from .zingmp3 import (
+    ZingMp3SongIE,
+    ZingMp3AlbumIE,
+)

 _ALL_CLASSES = [
     klass
@@ -529,4 +540,4 @@ def gen_extractors():

 def get_info_extractor(ie_name):
     """Returns the info extractor class with the given ie_name"""
-    return globals()[ie_name+'IE']
+    return globals()[ie_name + 'IE']
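get_info_extractor resolves an extractor class by naming convention: the IE key plus the literal suffix 'IE', looked up in the module's globals(). A self-contained sketch of the same registry pattern (the two stub classes are illustrative):

    class YoutubeIE(object):
        pass

    class ZDFIE(object):
        pass

    def get_info_extractor(ie_name):
        """Returns the info extractor class with the given ie_name"""
        return globals()[ie_name + 'IE']

    assert get_info_extractor('Youtube') is YoutubeIE
    assert get_info_extractor('ZDF') is ZDFIE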

View File

@@ -1,4 +1,5 @@
 from __future__ import unicode_literals
+
 import re

 from .common import InfoExtractor
@@ -18,15 +19,14 @@ class AcademicEarthCourseIE(InfoExtractor):
     }

     def _real_extract(self, url):
-        m = re.match(self._VALID_URL, url)
-        playlist_id = m.group('id')
+        playlist_id = self._match_id(url)

         webpage = self._download_webpage(url, playlist_id)
         title = self._html_search_regex(
-            r'<h1 class="playlist-name"[^>]*?>(.*?)</h1>', webpage, u'title')
+            r'<h1 class="playlist-name"[^>]*?>(.*?)</h1>', webpage, 'title')
         description = self._html_search_regex(
             r'<p class="excerpt"[^>]*?>(.*?)</p>',
-            webpage, u'description', fatal=False)
+            webpage, 'description', fatal=False)
         urls = re.findall(
             r'<li class="lecture-preview">\s*?<a target="_blank" href="([^"]+)">',
             webpage)
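Several of these "modernize" commits replace the re.match/group('id') boilerplate with InfoExtractor._match_id. A minimal sketch of what that helper does, inferred from how it is used here (class and regex below are illustrative, not quoted):

    import re

    class InfoExtractor(object):
        _VALID_URL = None

        def _match_id(self, url):
            m = re.match(self._VALID_URL, url)
            assert m
            return m.group('id')

    class AcademicEarthCourseIE(InfoExtractor):
        _VALID_URL = r'^https?://(?:www\.)?academicearth\.org/playlists/(?P<id>[^?#/]+)'

    ie = AcademicEarthCourseIE()
    assert ie._match_id(
        'http://academicearth.org/playlists/laws-of-nature/') == 'laws-of-nature'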

View File

@@ -15,8 +15,7 @@ from ..utils import (

 class AddAnimeIE(InfoExtractor):
-    _VALID_URL = r'^http://(?:\w+\.)?add-anime\.net/watch_video\.php\?(?:.*?)v=(?P<video_id>[\w_]+)(?:.*)'
+    _VALID_URL = r'^http://(?:\w+\.)?add-anime\.net/watch_video\.php\?(?:.*?)v=(?P<id>[\w_]+)(?:.*)'
     _TEST = {
         'url': 'http://www.add-anime.net/watch_video.php?v=24MR3YO5SAS9',
         'md5': '72954ea10bc979ab5e2eb288b21425a0',
@@ -29,9 +28,9 @@ class AddAnimeIE(InfoExtractor):
     }

     def _real_extract(self, url):
+        video_id = self._match_id(url)
+
         try:
-            mobj = re.match(self._VALID_URL, url)
-            video_id = mobj.group('video_id')
             webpage = self._download_webpage(url, video_id)
         except ExtractorError as ee:
             if not isinstance(ee.cause, compat_HTTPError) or \
@@ -49,7 +48,7 @@ class AddAnimeIE(InfoExtractor):
             r'a\.value = ([0-9]+)[+]([0-9]+)[*]([0-9]+);',
             redir_webpage)
         if av is None:
-            raise ExtractorError(u'Cannot find redirect math task')
+            raise ExtractorError('Cannot find redirect math task')
         av_res = int(av.group(1)) + int(av.group(2)) * int(av.group(3))

         parsed_url = compat_urllib_parse_urlparse(url)

View File

@@ -5,6 +5,7 @@ import re

 from .common import InfoExtractor

+
 class AdultSwimIE(InfoExtractor):
     _VALID_URL = r'https?://video\.adultswim\.com/(?P<path>.+?)(?:\.html)?(?:\?.*)?(?:#.*)?$'
     _TEST = {

View File

@@ -1,5 +1,4 @@
-#coding: utf-8
+# coding: utf-8
 from __future__ import unicode_literals
-
 import re
@@ -26,8 +25,7 @@ class AparatIE(InfoExtractor):
     }

     def _real_extract(self, url):
-        m = re.match(self._VALID_URL, url)
-        video_id = m.group('id')
+        video_id = self._match_id(url)

         # Note: There is an easier-to-parse configuration at
         # http://www.aparat.com/video/video/config/videohash/%video_id
@@ -40,15 +38,15 @@ class AparatIE(InfoExtractor):
         for i, video_url in enumerate(video_urls):
             req = HEADRequest(video_url)
             res = self._request_webpage(
-                req, video_id, note=u'Testing video URL %d' % i, errnote=False)
+                req, video_id, note='Testing video URL %d' % i, errnote=False)
             if res:
                 break
         else:
-            raise ExtractorError(u'No working video URLs found')
+            raise ExtractorError('No working video URLs found')

-        title = self._search_regex(r'\s+title:\s*"([^"]+)"', webpage, u'title')
+        title = self._search_regex(r'\s+title:\s*"([^"]+)"', webpage, 'title')
         thumbnail = self._search_regex(
-            r'\s+image:\s*"([^"]+)"', webpage, u'thumbnail', fatal=False)
+            r'\s+image:\s*"([^"]+)"', webpage, 'thumbnail', fatal=False)

         return {
             'id': video_id,

View File

@@ -70,15 +70,17 @@ class AppleTrailersIE(InfoExtractor):
         uploader_id = mobj.group('company')

         playlist_url = compat_urlparse.urljoin(url, 'includes/playlists/itunes.inc')
+
         def fix_html(s):
             s = re.sub(r'(?s)<script[^<]*?>.*?</script>', '', s)
             s = re.sub(r'<img ([^<]*?)>', r'<img \1/>', s)
             # The ' in the onClick attributes are not escaped, it couldn't be parsed
             # like: http://trailers.apple.com/trailers/wb/gravity/
+
             def _clean_json(m):
                 return 'iTunes.playURL(%s);' % m.group(1).replace('\'', '&#39;')
             s = re.sub(self._JSON_RE, _clean_json, s)
-            s = '<html>' + s + u'</html>'
+            s = '<html>%s</html>' % s
             return s

         doc = self._download_xml(playlist_url, movie, transform_source=fix_html)

View File

@@ -192,4 +192,3 @@ class ARDIE(InfoExtractor):
         'upload_date': upload_date,
         'thumbnail': thumbnail,
     }
-

View File

@@ -12,17 +12,17 @@ class AudiomackIE(InfoExtractor):
     _VALID_URL = r'https?://(?:www\.)?audiomack\.com/song/(?P<id>[\w/-]+)'
     IE_NAME = 'audiomack'
     _TESTS = [
-        #hosted on audiomack
+        # hosted on audiomack
         {
             'url': 'http://www.audiomack.com/song/roosh-williams/extraordinary',
             'info_dict':
             {
-                'id' : 'roosh-williams/extraordinary',
+                'id': 'roosh-williams/extraordinary',
                 'ext': 'mp3',
                 'title': 'Roosh Williams - Extraordinary'
             }
         },
-        #hosted on soundcloud via audiomack
+        # hosted on soundcloud via audiomack
         {
             'url': 'http://www.audiomack.com/song/xclusiveszone/take-kare',
             'file': '172419696.mp3',
@@ -49,7 +49,7 @@ class AudiomackIE(InfoExtractor):
             raise ExtractorError("Unable to deduce api url of song")
         realurl = api_response["url"]

-        #Audiomack wraps a lot of soundcloud tracks in their branded wrapper
+        # Audiomack wraps a lot of soundcloud tracks in their branded wrapper
         # - if so, pass the work off to the soundcloud extractor
         if SoundcloudIE.suitable(realurl):
             return {'_type': 'url', 'url': realurl, 'ie_key': 'Soundcloud'}

View File

@@ -18,7 +18,7 @@ class BambuserIE(InfoExtractor):
     _TEST = {
         'url': 'http://bambuser.com/v/4050584',
         # MD5 seems to be flaky, see https://travis-ci.org/rg3/youtube-dl/jobs/14051016#L388
-        #u'md5': 'fba8f7693e48fd4e8641b3fd5539a641',
+        # 'md5': 'fba8f7693e48fd4e8641b3fd5539a641',
         'info_dict': {
             'id': '4050584',
             'ext': 'flv',
@@ -73,7 +73,8 @@ class BambuserChannelIE(InfoExtractor):
         urls = []
         last_id = ''
         for i in itertools.count(1):
-            req_url = ('http://bambuser.com/xhr-api/index.php?username={user}'
+            req_url = (
+                'http://bambuser.com/xhr-api/index.php?username={user}'
                 '&sort=created&access_mode=0%2C1%2C2&limit={count}'
                 '&method=broadcast&format=json&vid_older_than={last}'
             ).format(user=user, count=self._STEP, last=last_id)

View File

@@ -83,12 +83,12 @@ class BandcampIE(InfoExtractor):
         initial_url = mp3_info['url']
         re_url = r'(?P<server>http://(.*?)\.bandcamp\.com)/download/track\?enc=mp3-320&fsig=(?P<fsig>.*?)&id=(?P<id>.*?)&ts=(?P<ts>.*)$'
         m_url = re.match(re_url, initial_url)
-        #We build the url we will use to get the final track url
+        # We build the url we will use to get the final track url
         # This url is build in Bandcamp in the script download_bunde_*.js
         request_url = '%s/statdownload/track?enc=mp3-320&fsig=%s&id=%s&ts=%s&.rand=665028774616&.vrs=1' % (m_url.group('server'), m_url.group('fsig'), video_id, m_url.group('ts'))
         final_url_webpage = self._download_webpage(request_url, video_id, 'Requesting download url')
         # If we could correctly generate the .rand field the url would be
-        #in the "download_url" key
+        # in the "download_url" key
         final_url = re.search(r'"retry_url":"(.*?)"', final_url_webpage).group(1)

         return {

View File

@@ -1,4 +1,4 @@
-#coding: utf-8
+# coding: utf-8
 from __future__ import unicode_literals

 from .common import InfoExtractor

View File

@@ -0,0 +1,37 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+
+
+class BpbIE(InfoExtractor):
+    IE_DESC = 'Bundeszentrale für politische Bildung'
+    _VALID_URL = r'http://www\.bpb\.de/mediathek/(?P<id>[0-9]+)/'
+
+    _TEST = {
+        'url': 'http://www.bpb.de/mediathek/297/joachim-gauck-zu-1989-und-die-erinnerung-an-die-ddr',
+        'md5': '0792086e8e2bfbac9cdf27835d5f2093',
+        'info_dict': {
+            'id': '297',
+            'ext': 'mp4',
+            'title': 'Joachim Gauck zu 1989 und die Erinnerung an die DDR',
+            'description': 'Joachim Gauck, erster Beauftragter für die Stasi-Unterlagen, spricht auf dem Geschichtsforum über die friedliche Revolution 1989 und eine "gewisse Traurigkeit" im Umgang mit der DDR-Vergangenheit.'
+        }
+    }
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+
+        title = self._html_search_regex(
+            r'<h2 class="white">(.*?)</h2>', webpage, 'title')
+        video_url = self._html_search_regex(
+            r'(http://film\.bpb\.de/player/dokument_[0-9]+\.mp4)',
+            webpage, 'video URL')
+
+        return {
+            'id': video_id,
+            'url': video_url,
+            'title': title,
+            'description': self._og_search_description(webpage),
+        }

View File

@@ -45,4 +45,4 @@ class CBSIE(InfoExtractor):
         real_id = self._search_regex(
             r"video\.settings\.pid\s*=\s*'([^']+)';",
             webpage, 'real video ID')
-        return self.url_result(u'theplatform:%s' % real_id)
+        return self.url_result('theplatform:%s' % real_id)

View File

@@ -5,6 +5,7 @@ import re
 from .common import InfoExtractor
 from ..utils import ExtractorError

+
 class Channel9IE(InfoExtractor):
     '''
     Common extractor for channel9.msdn.com.
@@ -31,7 +32,7 @@ class Channel9IE(InfoExtractor):
                 'session_code': 'KOS002',
                 'session_day': 'Day 1',
                 'session_room': 'Arena 1A',
-                'session_speakers': [ 'Ed Blankenship', 'Andrew Coates', 'Brady Gaster', 'Patrick Klug', 'Mads Kristensen' ],
+                'session_speakers': ['Ed Blankenship', 'Andrew Coates', 'Brady Gaster', 'Patrick Klug', 'Mads Kristensen'],
             },
         },
         {
@@ -44,7 +45,7 @@ class Channel9IE(InfoExtractor):
                 'description': 'md5:d1e6ecaafa7fb52a2cacdf9599829f5b',
                 'duration': 1540,
                 'thumbnail': 'http://video.ch9.ms/ch9/87e1/0300391f-a455-4c72-bec3-4422f19287e1/selfservicenuk_512.jpg',
-                'authors': [ 'Mike Wilmot' ],
+                'authors': ['Mike Wilmot'],
             },
         }
     ]
@@ -187,7 +188,8 @@ class Channel9IE(InfoExtractor):
         view_count = self._extract_view_count(html)
         comment_count = self._extract_comment_count(html)

-        common = {'_type': 'video',
+        common = {
+            '_type': 'video',
                   'id': content_path,
                   'description': description,
                   'thumbnail': thumbnail,
@@ -202,17 +204,17 @@ class Channel9IE(InfoExtractor):

         if slides is not None:
             d = common.copy()
-            d.update({ 'title': title + '-Slides', 'url': slides })
+            d.update({'title': title + '-Slides', 'url': slides})
             result.append(d)

         if zip_ is not None:
             d = common.copy()
-            d.update({ 'title': title + '-Zip', 'url': zip_ })
+            d.update({'title': title + '-Zip', 'url': zip_})
             result.append(d)

         if len(formats) > 0:
             d = common.copy()
-            d.update({ 'title': title, 'formats': formats })
+            d.update({'title': title, 'formats': formats})
             result.append(d)

         return result

View File

@@ -77,7 +77,7 @@ class CinemassacreIE(InfoExtractor):
         if videolist_url:
             videolist = self._download_xml(videolist_url, video_id, 'Downloading videolist XML')
             formats = []
-            baseurl = vidurl[:vidurl.rfind('/')+1]
+            baseurl = vidurl[:vidurl.rfind('/') + 1]
             for video in videolist.findall('.//video'):
                 src = video.get('src')
                 if not src:

View File

@@ -24,7 +24,7 @@ class ClipfishIE(InfoExtractor):
             'title': 'FIFA 14 - E3 2013 Trailer',
             'duration': 82,
         },
-        u'skip': 'Blocked in the US'
+        'skip': 'Blocked in the US'
     }

     def _real_extract(self, url):
@@ -34,7 +34,7 @@ class ClipfishIE(InfoExtractor):
         info_url = ('http://www.clipfish.de/devxml/videoinfo/%s?ts=%d' %
                     (video_id, int(time.time())))
         doc = self._download_xml(
-            info_url, video_id, note=u'Downloading info page')
+            info_url, video_id, note='Downloading info page')
         title = doc.find('title').text
         video_url = doc.find('filename').text
         if video_url is None:

View File

@@ -39,6 +39,7 @@ class ClipsyndicateIE(InfoExtractor):
             transform_source=fix_xml_ampersands)

         track_doc = pdoc.find('trackList/track')
+
         def find_param(name):
             node = find_xpath_attr(track_doc, './/param', 'name', name)
             if node is not None:

View File

@@ -25,8 +25,7 @@ class CNNIE(InfoExtractor):
             'duration': 135,
             'upload_date': '20130609',
         },
-    },
-    {
+    }, {
         "url": "http://edition.cnn.com/video/?/video/us/2013/08/21/sot-student-gives-epic-speech.georgia-institute-of-technology&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+rss%2Fcnn_topstories+%28RSS%3A+Top+Stories%29",
         "md5": "b5cc60c60a3477d185af8f19a2a26f4e",
         "info_dict": {

View File

@@ -10,7 +10,8 @@ from ..utils import int_or_none

 class CollegeHumorIE(InfoExtractor):
     _VALID_URL = r'^(?:https?://)?(?:www\.)?collegehumor\.com/(video|embed|e)/(?P<videoid>[0-9]+)/?(?P<shorttitle>.*)$'

-    _TESTS = [{
+    _TESTS = [
+        {
         'url': 'http://www.collegehumor.com/video/6902724/comic-con-cosplay-catastrophe',
         'md5': 'dcc0f5c1c8be98dc33889a191f4c26bd',
         'info_dict': {
@@ -21,8 +22,7 @@ class CollegeHumorIE(InfoExtractor):
             'age_limit': 13,
             'duration': 187,
         },
-    },
-    {
+    }, {
         'url': 'http://www.collegehumor.com/video/3505939/font-conference',
         'md5': '72fa701d8ef38664a4dbb9e2ab721816',
         'info_dict': {
@@ -33,9 +33,8 @@ class CollegeHumorIE(InfoExtractor):
             'age_limit': 10,
             'duration': 179,
         },
-    },
+    }, {
         # embedded youtube video
-    {
         'url': 'http://www.collegehumor.com/embed/6950306',
         'info_dict': {
             'id': 'Z-bao9fg6Yc',

View File

@@ -296,9 +296,11 @@ class InfoExtractor(object):
         content = self._webpage_read_content(urlh, url_or_request, video_id, note, errnote, fatal)
         return (content, urlh)

-    def _webpage_read_content(self, urlh, url_or_request, video_id, note=None, errnote=None, fatal=True):
+    def _webpage_read_content(self, urlh, url_or_request, video_id, note=None, errnote=None, fatal=True, prefix=None):
         content_type = urlh.headers.get('Content-Type', '')
         webpage_bytes = urlh.read()
+        if prefix is not None:
+            webpage_bytes = prefix + webpage_bytes
         m = re.match(r'[a-zA-Z0-9_.-]+/[a-zA-Z0-9_.-]+\s*;\s*charset=(.+)', content_type)
         if m:
             encoding = m.group(1)
@@ -423,17 +425,18 @@ class InfoExtractor(object):
         """Report attempt to log in."""
         self.to_screen('Logging in')

-    #Methods for following #608
+    # Methods for following #608
     @staticmethod
     def url_result(url, ie=None, video_id=None):
         """Returns a url that points to a page that should be processed"""
-        #TODO: ie should be the class used for getting the info
+        # TODO: ie should be the class used for getting the info
         video_info = {'_type': 'url',
                       'url': url,
                       'ie_key': ie}
         if video_id is not None:
             video_info['id'] = video_id
         return video_info
+
     @staticmethod
     def playlist_result(entries, playlist_id=None, playlist_title=None):
         """Returns a playlist"""

View File

@@ -54,7 +54,7 @@ class CrackedIE(InfoExtractor):

         return {
             'id': video_id,
-            'url':video_url,
+            'url': video_url,
             'title': title,
             'description': description,
             'timestamp': timestamp,

View File

@@ -69,11 +69,9 @@ class CrunchyrollIE(SubtitlesInfoExtractor):
         login_request.add_header('Content-Type', 'application/x-www-form-urlencoded')
         self._download_webpage(login_request, None, False, 'Wrong login info')

-
     def _real_initialize(self):
         self._login()

-
     def _decrypt_subtitles(self, data, iv, id):
         data = bytes_to_intlist(data)
         iv = bytes_to_intlist(iv)
@@ -99,8 +97,10 @@ class CrunchyrollIE(SubtitlesInfoExtractor):
             return shaHash + [0] * 12

         key = obfuscate_key(id)
+
         class Counter:
             __value = iv
+
             def next_value(self):
                 temp = self.__value
                 self.__value = inc(self.__value)
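The Counter above yields successive 16-byte counter blocks: each call returns the current value and post-increments it as a big-endian counter via aes.py's inc. A standalone sketch of that behaviour (the constructor form and the inc body shown here are a reconstruction, not a quote of the patch):

    def inc(data):
        data = data[:]  # copy
        for i in range(len(data) - 1, -1, -1):
            if data[i] == 255:
                data[i] = 0
            else:
                data[i] = data[i] + 1
                break
        return data

    class Counter:
        def __init__(self, iv):
            self.__value = iv

        def next_value(self):
            temp = self.__value
            self.__value = inc(self.__value)
            return temp

    ctr = Counter([0] * 15 + [254])
    assert ctr.next_value() == [0] * 15 + [254]
    assert ctr.next_value() == [0] * 15 + [255]
    # 255 wraps to 0 and carries into the next byte:
    assert ctr.next_value() == [0] * 14 + [1, 0]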
@@ -183,7 +183,7 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text

         return output

-    def _real_extract(self,url):
+    def _real_extract(self, url):
         mobj = re.match(self._VALID_URL, url)
         video_id = mobj.group('video_id')
@@ -226,10 +226,10 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
         formats = []
         for fmt in re.findall(r'\?p([0-9]{3,4})=1', webpage):
             stream_quality, stream_format = self._FORMAT_IDS[fmt]
-            video_format = fmt+'p'
+            video_format = fmt + 'p'
             streamdata_req = compat_urllib_request.Request('http://www.crunchyroll.com/xml/')
             # urlencode doesn't work!
-            streamdata_req.data = 'req=RpcApiVideoEncode%5FGetStreamInfo&video%5Fencode%5Fquality='+stream_quality+'&media%5Fid='+stream_id+'&video%5Fformat='+stream_format
+            streamdata_req.data = 'req=RpcApiVideoEncode%5FGetStreamInfo&video%5Fencode%5Fquality=' + stream_quality + '&media%5Fid=' + stream_id + '&video%5Fformat=' + stream_format
             streamdata_req.add_header('Content-Type', 'application/x-www-form-urlencoded')
             streamdata_req.add_header('Content-Length', str(len(streamdata_req.data)))
             streamdata = self._download_xml(
@@ -248,8 +248,9 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
         subtitles = {}
         sub_format = self._downloader.params.get('subtitlesformat', 'srt')
         for sub_id, sub_name in re.findall(r'\?ssid=([0-9]+)" title="([^"]+)', webpage):
-            sub_page = self._download_webpage('http://www.crunchyroll.com/xml/?req=RpcApiSubtitle_GetXml&subtitle_script_id='+sub_id,\
-                video_id, note='Downloading subtitles for '+sub_name)
+            sub_page = self._download_webpage(
+                'http://www.crunchyroll.com/xml/?req=RpcApiSubtitle_GetXml&subtitle_script_id=' + sub_id,
+                video_id, note='Downloading subtitles for ' + sub_name)
             id = self._search_regex(r'id=\'([0-9]+)', sub_page, 'subtitle_id', fatal=False)
             iv = self._search_regex(r'<iv>([^<]+)', sub_page, 'subtitle_iv', fatal=False)
             data = self._search_regex(r'<data>([^<]+)', sub_page, 'subtitle_data', fatal=False)

View File

@@ -1,4 +1,4 @@
-#coding: utf-8
+# coding: utf-8
 from __future__ import unicode_literals

 import re
@@ -18,6 +18,7 @@ from ..utils import (
     unescapeHTML,
 )

+
 class DailymotionBaseInfoExtractor(InfoExtractor):
     @staticmethod
     def _build_request(url):
@@ -27,6 +28,7 @@ class DailymotionBaseInfoExtractor(InfoExtractor):
         request.add_header('Cookie', 'ff=off')
         return request

+
 class DailymotionIE(DailymotionBaseInfoExtractor, SubtitlesInfoExtractor):
     """Information Extractor for Dailymotion"""

View File

@@ -27,7 +27,7 @@ class DotsubIE(InfoExtractor):
         video_id = mobj.group('id')
         info_url = "https://dotsub.com/api/media/%s/metadata" % video_id
         info = self._download_json(info_url, video_id)
-        date = time.gmtime(info['dateCreated']/1000) # The timestamp is in miliseconds
+        date = time.gmtime(info['dateCreated'] / 1000)  # The timestamp is in miliseconds

         return {
             'id': video_id,

View File

@@ -11,15 +11,15 @@ from ..utils import url_basename

 class DropboxIE(InfoExtractor):
     _VALID_URL = r'https?://(?:www\.)?dropbox[.]com/sh?/(?P<id>[a-zA-Z0-9]{15})/.*'
-    _TESTS = [{
+    _TESTS = [
+        {
             'url': 'https://www.dropbox.com/s/nelirfsxnmcfbfh/youtube-dl%20test%20video%20%27%C3%A4%22BaW_jenozKc.mp4?dl=0',
             'info_dict': {
                 'id': 'nelirfsxnmcfbfh',
                 'ext': 'mp4',
                 'title': 'youtube-dl test video \'ä"BaW_jenozKc'
             }
-        },
-        {
+        }, {
             'url': 'https://www.dropbox.com/sh/662glsejgzoj9sr/AAByil3FGH9KFNZ13e08eSa1a/Pregame%20Ceremony%20Program%20PA%2020140518.m4v',
             'only_matching': True,
         },

View File

@@ -125,7 +125,7 @@ class EightTracksIE(InfoExtractor):
             info = {
                 'id': compat_str(track_data['id']),
                 'url': track_data['track_file_stream_url'],
-                'title': track_data['performer'] + u' - ' + track_data['name'],
+                'title': track_data['performer'] + ' - ' + track_data['name'],
                 'raw_title': track_data['name'],
                 'uploader_id': data['user']['login'],
                 'ext': 'm4a',

View File

@@ -40,7 +40,7 @@ class FC2IE(InfoExtractor):
         info_url = (
             "http://video.fc2.com/ginfo.php?mimi={1:s}&href={2:s}&v={0:s}&fversion=WIN%2011%2C6%2C602%2C180&from=2&otag=0&upid={0:s}&tk=null&".
-            format(video_id, mimi, compat_urllib_request.quote(refer, safe='').replace('.','%2E')))
+            format(video_id, mimi, compat_urllib_request.quote(refer, safe='').replace('.', '%2E')))

         info_webpage = self._download_webpage(
             info_url, video_id, note='Downloading info page')

View File

@@ -26,6 +26,19 @@ class FranceTVBaseInfoExtractor(InfoExtractor):
         if info.get('status') == 'NOK':
             raise ExtractorError(
                 '%s returned error: %s' % (self.IE_NAME, info['message']), expected=True)
+        allowed_countries = info['videos'][0].get('geoblocage')
+        if allowed_countries:
+            georestricted = True
+            geo_info = self._download_json(
+                'http://geo.francetv.fr/ws/edgescape.json', video_id,
+                'Downloading geo restriction info')
+            country = geo_info['reponse']['geo_info']['country_code']
+            if country not in allowed_countries:
+                raise ExtractorError(
+                    'The video is not available from your location',
+                    expected=True)
+        else:
+            georestricted = False

         formats = []
         for video in info['videos']:
@@ -36,6 +49,10 @@ class FranceTVBaseInfoExtractor(InfoExtractor):
                 continue
             format_id = video['format']
             if video_url.endswith('.f4m'):
+                if georestricted:
+                    # See https://github.com/rg3/youtube-dl/issues/3963
+                    # m3u8 urls work fine
+                    continue
                 video_url_parsed = compat_urllib_parse_urlparse(video_url)
                 f4m_url = self._download_webpage(
                     'http://hdfauth.francetv.fr/esi/urltokengen2.html?url=%s' % video_url_parsed.path,
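The geo check added above works in two steps: the API response carries an optional whitelist of country codes ('geoblocage'), and a second service reports the caller's country; a mismatch fails early instead of at download time. A condensed sketch of that decision with fabricated sample data (the helper name is mine):

    def check_geo(info, client_country):
        allowed_countries = info['videos'][0].get('geoblocage')
        if not allowed_countries:
            return 'unrestricted'
        if client_country not in allowed_countries:
            raise ValueError('The video is not available from your location')
        return 'georestricted'

    info = {'videos': [{'geoblocage': ['FR', 'BE', 'CH']}]}
    assert check_geo(info, 'FR') == 'georestricted'
    assert check_geo({'videos': [{}]}, 'US') == 'unrestricted'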

View File

@@ -11,7 +11,7 @@ class GamekingsIE(InfoExtractor):
         'url': 'http://www.gamekings.tv/videos/phoenix-wright-ace-attorney-dual-destinies-review/',
         # MD5 is flaky, seems to change regularly
         # 'md5': '2f32b1f7b80fdc5cb616efb4f387f8a3',
-        u'info_dict': {
+        'info_dict': {
             'id': '20130811',
             'ext': 'mp4',
             'title': 'Phoenix Wright: Ace Attorney \u2013 Dual Destinies Review',

View File

@@ -445,6 +445,30 @@ class GenericIE(InfoExtractor):
                 'title': 'Rosetta #CometLanding webcast HL 10',
             }
         },
+        # LazyYT
+        {
+            'url': 'http://discourse.ubuntu.com/t/unity-8-desktop-mode-windows-on-mir/1986',
+            'info_dict': {
+                'title': 'Unity 8 desktop-mode windows on Mir! - Ubuntu Discourse',
+            },
+            'playlist_mincount': 2,
+        },
+        # Direct link with incorrect MIME type
+        {
+            'url': 'http://ftp.nluug.nl/video/nluug/2014-11-20_nj14/zaal-2/5_Lennart_Poettering_-_Systemd.webm',
+            'md5': '4ccbebe5f36706d85221f204d7eb5913',
+            'info_dict': {
+                'url': 'http://ftp.nluug.nl/video/nluug/2014-11-20_nj14/zaal-2/5_Lennart_Poettering_-_Systemd.webm',
+                'id': '5_Lennart_Poettering_-_Systemd',
+                'ext': 'webm',
+                'title': '5_Lennart_Poettering_-_Systemd',
+                'upload_date': '20141120',
+            },
+            'expected_warnings': [
+                'URL could be a direct video link, returning it as such.'
+            ]
+        }
     ]

     def report_following_redirect(self, new_url):
@@ -537,9 +561,9 @@ class GenericIE(InfoExtractor):
         if default_search in ('error', 'fixup_error'):
             raise ExtractorError(
-                ('%r is not a valid URL. '
-                 'Set --default-search "ytsearch" (or run youtube-dl "ytsearch:%s" ) to search YouTube'
-                ) % (url, url), expected=True)
+                '%r is not a valid URL. '
+                'Set --default-search "ytsearch" (or run youtube-dl "ytsearch:%s" ) to search YouTube'
+                % (url, url), expected=True)
         else:
             if ':' not in default_search:
                 default_search += ':'
@@ -598,10 +622,28 @@ class GenericIE(InfoExtractor):
         if not self._downloader.params.get('test', False) and not is_intentional:
             self._downloader.report_warning('Falling back on generic information extractor.')

-        if full_response:
-            webpage = self._webpage_read_content(full_response, url, video_id)
-        else:
-            webpage = self._download_webpage(url, video_id)
+        if not full_response:
+            full_response = self._request_webpage(url, video_id)
+
+        # Maybe it's a direct link to a video?
+        # Be careful not to download the whole thing!
+        first_bytes = full_response.read(512)
+        if not re.match(r'^\s*<', first_bytes.decode('utf-8', 'replace')):
+            self._downloader.report_warning(
+                'URL could be a direct video link, returning it as such.')
+            upload_date = unified_strdate(
+                head_response.headers.get('Last-Modified'))
+            return {
+                'id': video_id,
+                'title': os.path.splitext(url_basename(url))[0],
+                'direct': True,
+                'url': url,
+                'upload_date': upload_date,
+            }
+
+        webpage = self._webpage_read_content(
+            full_response, url, video_id, prefix=first_bytes)

         self.report_extraction(video_id)

         # Is it an RSS feed?
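The direct-link detection reads only the first 512 bytes and treats the payload as HTML only if it starts, after optional whitespace, with '<'; those bytes are then handed back to _webpage_read_content via the new prefix argument so nothing is read twice. The sniffing test in isolation:

    import re

    def looks_like_html(first_bytes):
        return bool(re.match(r'^\s*<', first_bytes.decode('utf-8', 'replace')))

    assert looks_like_html(b'  <!DOCTYPE html><html>') is True
    # A WebM file starts with the EBML magic bytes, not markup, so a
    # mislabelled Content-Type no longer fools the generic extractor:
    assert looks_like_html(b'\x1a\x45\xdf\xa3' + b'\x00' * 508) is False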
@@ -702,6 +744,12 @@ class GenericIE(InfoExtractor):
             return _playlist_from_matches(
                 matches, lambda m: unescapeHTML(m[1]))

+        # Look for lazyYT YouTube embed
+        matches = re.findall(
+            r'class="lazyYT" data-youtube-id="([^"]+)"', webpage)
+        if matches:
+            return _playlist_from_matches(matches, lambda m: unescapeHTML(m))
+
         # Look for embedded Dailymotion player
         matches = re.findall(
             r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:www\.)?dailymotion\.com/embed/video/.+?)\1', webpage)
@@ -748,7 +796,7 @@ class GenericIE(InfoExtractor):
         # Look for embedded blip.tv player
         mobj = re.search(r'<meta\s[^>]*https?://api\.blip\.tv/\w+/redirect/\w+/(\d+)', webpage)
         if mobj:
-            return self.url_result('http://blip.tv/a/a-'+mobj.group(1), 'BlipTV')
+            return self.url_result('http://blip.tv/a/a-' + mobj.group(1), 'BlipTV')
         mobj = re.search(r'<(?:iframe|embed|object)\s[^>]*(https?://(?:\w+\.)?blip\.tv/(?:play/|api\.swf#)[a-zA-Z0-9_]+)', webpage)
         if mobj:
             return self.url_result(mobj.group(1), 'BlipTV')
@@ -1025,4 +1073,3 @@ class GenericIE(InfoExtractor):
             '_type': 'playlist',
             'entries': entries,
         }
-

View File

@@ -9,14 +9,15 @@ from ..utils import (
     determine_ext,
     compat_urllib_parse,
     compat_urllib_request,
+    int_or_none,
 )


 class GorillaVidIE(InfoExtractor):
-    IE_DESC = 'GorillaVid.in, daclips.in and movpod.in'
+    IE_DESC = 'GorillaVid.in, daclips.in, movpod.in and fastvideo.in'
     _VALID_URL = r'''(?x)
         https?://(?P<host>(?:www\.)?
-            (?:daclips\.in|gorillavid\.in|movpod\.in))/
+            (?:daclips\.in|gorillavid\.in|movpod\.in|fastvideo\.in))/
         (?:embed-)?(?P<id>[0-9a-zA-Z]+)(?:-[0-9]+x[0-9]+\.html)?
     '''
@@ -49,6 +50,16 @@ class GorillaVidIE(InfoExtractor):
             'title': 'Micro Pig piglets ready on 16th July 2009-bG0PdrCdxUc',
             'thumbnail': 're:http://.*\.jpg',
         }
+    }, {
+        # video with countdown timeout
+        'url': 'http://fastvideo.in/1qmdn1lmsmbw',
+        'md5': '8b87ec3f6564a3108a0e8e66594842ba',
+        'info_dict': {
+            'id': '1qmdn1lmsmbw',
+            'ext': 'mp4',
+            'title': 'Man of Steel - Trailer',
+            'thumbnail': 're:http://.*\.jpg',
+        },
     }, {
         'url': 'http://movpod.in/0wguyyxi1yca',
         'only_matching': True,
@@ -71,6 +82,12 @@ class GorillaVidIE(InfoExtractor):
         ''', webpage))

         if fields['op'] == 'download1':
+            countdown = int_or_none(self._search_regex(
+                r'<span id="countdown_str">(?:[Ww]ait)?\s*<span id="cxc">(\d+)</span>\s*(?:seconds?)?</span>',
+                webpage, 'countdown', default=None))
+            if countdown:
+                self._sleep(countdown, video_id)
+
             post = compat_urllib_parse.urlencode(fields)

             req = compat_urllib_request.Request(url, post)
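The countdown handling scrapes the hoster's "wait N seconds" widget and sleeps before re-posting the form. The regex in isolation, tested against a fabricated snippet of such a page:

    import re

    webpage = ('<span id="countdown_str">Wait '
               '<span id="cxc">7</span> seconds</span>')
    countdown = re.search(
        r'<span id="countdown_str">(?:[Ww]ait)?\s*<span id="cxc">(\d+)</span>\s*(?:seconds?)?</span>',
        webpage)
    assert countdown and int(countdown.group(1)) == 7

In the extractor the wait goes through self._sleep(countdown, video_id), which also logs the pause so the user knows why the download is stalled.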
@@ -78,9 +95,13 @@ class GorillaVidIE(InfoExtractor):

             webpage = self._download_webpage(req, video_id, 'Downloading video page')

-        title = self._search_regex(r'style="z-index: [0-9]+;">([^<]+)</span>', webpage, 'title')
-        video_url = self._search_regex(r'file\s*:\s*\'(http[^\']+)\',', webpage, 'file url')
-        thumbnail = self._search_regex(r'image\s*:\s*\'(http[^\']+)\',', webpage, 'thumbnail', fatal=False)
+        title = self._search_regex(
+            r'style="z-index: [0-9]+;">([^<]+)</span>',
+            webpage, 'title', default=None) or self._og_search_title(webpage)
+        video_url = self._search_regex(
+            r'file\s*:\s*["\'](http[^"\']+)["\'],', webpage, 'file url')
+        thumbnail = self._search_regex(
+            r'image\s*:\s*["\'](http[^"\']+)["\'],', webpage, 'thumbnail', fatal=False)

         formats = [{
             'format_id': 'sd',

View File

@@ -1,12 +1,13 @@
 from __future__ import unicode_literals

-import re
 import base64

 from .common import InfoExtractor
-from ..utils import (
+from ..compat import (
     compat_urllib_parse,
     compat_urllib_request,
+)
+from ..utils import (
     ExtractorError,
     HEADRequest,
 )
@@ -16,25 +17,24 @@ class HotNewHipHopIE(InfoExtractor):
     _VALID_URL = r'http://www\.hotnewhiphop\.com/.*\.(?P<id>.*)\.html'
     _TEST = {
         'url': 'http://www.hotnewhiphop.com/freddie-gibbs-lay-it-down-song.1435540.html',
-        'file': '1435540.mp3',
         'md5': '2c2cd2f76ef11a9b3b581e8b232f3d96',
         'info_dict': {
+            'id': '1435540',
+            'ext': 'mp3',
             'title': 'Freddie Gibbs - Lay It Down'
         }
     }

     def _real_extract(self, url):
-        m = re.match(self._VALID_URL, url)
-        video_id = m.group('id')
-
-        webpage_src = self._download_webpage(url, video_id)
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)

         video_url_base64 = self._search_regex(
-            r'data-path="(.*?)"', webpage_src, u'video URL', fatal=False)
+            r'data-path="(.*?)"', webpage, 'video URL', default=None)

         if video_url_base64 is None:
             video_url = self._search_regex(
-                r'"contentUrl" content="(.*?)"', webpage_src, u'video URL')
+                r'"contentUrl" content="(.*?)"', webpage, 'content URL')
             return self.url_result(video_url, ie='Youtube')

         reqdata = compat_urllib_parse.urlencode([
@@ -59,11 +59,11 @@ class HotNewHipHopIE(InfoExtractor):
         if video_url.endswith('.html'):
             raise ExtractorError('Redirect failed')

-        video_title = self._og_search_title(webpage_src).strip()
+        video_title = self._og_search_title(webpage).strip()

         return {
             'id': video_id,
             'url': video_url,
             'title': video_title,
-            'thumbnail': self._og_search_thumbnail(webpage_src),
+            'thumbnail': self._og_search_thumbnail(webpage),
         }

View File

@@ -63,8 +63,10 @@ class IGNIE(InfoExtractor):
                 'id': '078fdd005f6d3c02f63d795faa1b984f',
                 'ext': 'mp4',
                 'title': 'Rewind Theater - Wild Trailer Gamescom 2014',
-                'description': 'Giant skeletons, bloody hunts, and captivating'
-                               ' natural beauty take our breath away.',
+                'description': (
+                    'Giant skeletons, bloody hunts, and captivating'
+                    ' natural beauty take our breath away.'
+                ),
             },
         },
     ]

View File

@@ -32,7 +32,7 @@ class InternetVideoArchiveIE(InfoExtractor):
     def _clean_query(query):
         NEEDED_ARGS = ['publishedid', 'customerid']
         query_dic = compat_urlparse.parse_qs(query)
-        cleaned_dic = dict((k,v[0]) for (k,v) in query_dic.items() if k in NEEDED_ARGS)
+        cleaned_dic = dict((k, v[0]) for (k, v) in query_dic.items() if k in NEEDED_ARGS)
         # Other player ids return m3u8 urls
         cleaned_dic['playerid'] = '247'
         cleaned_dic['videokbrate'] = '100000'
@@ -58,9 +58,13 @@ class InternetVideoArchiveIE(InfoExtractor):
         item = info.find('channel/item')

         def _bp(p):
-            return xpath_with_ns(p,
-                {'media': 'http://search.yahoo.com/mrss/',
-                 'jwplayer': 'http://developer.longtailvideo.com/trac/wiki/FlashFormats'})
+            return xpath_with_ns(
+                p,
+                {
+                    'media': 'http://search.yahoo.com/mrss/',
+                    'jwplayer': 'http://developer.longtailvideo.com/trac/wiki/FlashFormats',
+                }
+            )
         formats = []
         for content in item.findall(_bp('media:group/media:content')):
             attr = content.attrib
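xpath_with_ns expands 'prefix:tag' path steps into the '{uri}tag' form that ElementTree requires. youtube-dl ships its own helper in utils; the minimal reimplementation below only mirrors the behaviour assumed by the call above:

    def xpath_with_ns(path, ns_map):
        components = [c.split(':') for c in path.split('/')]
        replaced = []
        for c in components:
            if len(c) == 1:
                replaced.append(c[0])
            else:
                ns, tag = c
                replaced.append('{%s}%s' % (ns_map[ns], tag))
        return '/'.join(replaced)

    assert xpath_with_ns(
        'media:group/media:content',
        {'media': 'http://search.yahoo.com/mrss/'},
    ) == '{http://search.yahoo.com/mrss/}group/{http://search.yahoo.com/mrss/}content'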

View File

@@ -54,7 +54,7 @@ class IPrimaIE(InfoExtractor):

         player_url = (
             'http://embed.livebox.cz/iprimaplay/player-embed-v2.js?__tok%s__=%s' %
-            (floor(random()*1073741824), floor(random()*1073741824))
+            (floor(random() * 1073741824), floor(random() * 1073741824))
         )

         req = compat_urllib_request.Request(player_url)


@@ -45,4 +45,3 @@ class JadoreCettePubIE(InfoExtractor):
             'title': title,
             'description': description,
         }
-


@@ -13,8 +13,10 @@ class KickStarterIE(InfoExtractor):
             'id': '1404461844',
             'ext': 'mp4',
             'title': 'Intersection: The Story of Josh Grant by Kyle Cowling',
-            'description': 'A unique motocross documentary that examines the '
-                'life and mind of one of sports most elite athletes: Josh Grant.',
+            'description': (
+                'A unique motocross documentary that examines the '
+                'life and mind of one of sports most elite athletes: Josh Grant.'
+            ),
         },
     }, {
         'note': 'Embedded video (not using the native kickstarter video service)',


@@ -30,4 +30,3 @@ class Ku6IE(InfoExtractor):
             'title': title,
             'url': downloadUrl
         }
-


@@ -75,4 +75,3 @@ class Laola1TvIE(InfoExtractor):
             'categories': categories,
             'ext': 'mp4',
         }
-


@@ -52,7 +52,7 @@ class LifeNewsIE(InfoExtractor):
             r'<div class=\'comments\'>\s*<span class=\'counter\'>(\d+)</span>', webpage, 'comment count', fatal=False)

         upload_date = self._html_search_regex(
-            r'<time datetime=\'([^\']+)\'>', webpage, 'upload date',fatal=False)
+            r'<time datetime=\'([^\']+)\'>', webpage, 'upload date', fatal=False)
         if upload_date is not None:
             upload_date = unified_strdate(upload_date)
@@ -71,4 +71,4 @@ class LifeNewsIE(InfoExtractor):
         if len(videos) == 1:
             return make_entry(video_id, videos[0])
         else:
-            return [make_entry(video_id, media, video_number+1) for video_number, media in enumerate(videos)]
+            return [make_entry(video_id, media, video_number + 1) for video_number, media in enumerate(videos)]


@@ -19,8 +19,7 @@ class LiveLeakIE(InfoExtractor):
             'uploader': 'ljfriel2',
             'title': 'Most unlucky car accident'
         }
-    },
-    {
+    }, {
         'url': 'http://www.liveleak.com/view?i=f93_1390833151',
         'md5': 'd3f1367d14cc3c15bf24fbfbe04b9abf',
         'info_dict': {
@@ -30,8 +29,7 @@ class LiveLeakIE(InfoExtractor):
             'uploader': 'ARD_Stinkt',
             'title': 'German Television does first Edward Snowden Interview (ENGLISH)',
         }
-    },
-    {
+    }, {
         'url': 'http://www.liveleak.com/view?i=4f7_1392687779',
         'md5': '42c6d97d54f1db107958760788c5f48f',
         'info_dict': {


@@ -7,6 +7,7 @@ from ..utils import (
     compat_urllib_parse,
 )

+
 class MalemotionIE(InfoExtractor):
     _VALID_URL = r'^(?:https?://)?malemotion\.com/video/(.+?)\.(?P<id>.+?)(#|$)'
     _TEST = {


@@ -54,7 +54,7 @@ class MonikerIE(InfoExtractor):
         title = os.path.splitext(data['fname'])[0]

-        #Could be several links with different quality
+        # Could be several links with different quality
         links = re.findall(r'"file" : "?(.+?)",', webpage)
         # Assume the links are ordered in quality
         formats = [{


@@ -49,7 +49,7 @@ class MooshareIE(InfoExtractor):
         page = self._download_webpage(url, video_id, 'Downloading page')

         if re.search(r'>Video Not Found or Deleted<', page) is not None:
-            raise ExtractorError(u'Video %s does not exist' % video_id, expected=True)
+            raise ExtractorError('Video %s does not exist' % video_id, expected=True)

         hash_key = self._html_search_regex(r'<input type="hidden" name="hash" value="([^"]+)">', page, 'hash')
         title = self._html_search_regex(r'(?m)<div class="blockTitle">\s*<h2>Watch ([^<]+)</h2>', page, 'title')


@@ -27,7 +27,7 @@ class MoviezineIE(InfoExtractor):
         webpage = self._download_webpage(url, video_id)
         jsplayer = self._download_webpage('http://www.moviezine.se/api/player.js?video=%s' % video_id, video_id, 'Downloading js api player')

-        formats =[{
+        formats = [{
             'format_id': 'sd',
             'url': self._html_search_regex(r'file: "(.+?)",', jsplayer, 'file'),
             'quality': 0,


@@ -60,7 +60,7 @@ class MTVServicesInfoExtractor(InfoExtractor):
             url = response.geturl()
             # Transform the url to get the best quality:
             url = re.sub(r'.+pxE=mp4', 'http://mtvnmobile.vo.llnwd.net/kip0/_pxn=0+_pxK=18639+_pxE=mp4', url, 1)
-            return [{'url': url,'ext': 'mp4'}]
+            return [{'url': url, 'ext': 'mp4'}]

     def _extract_video_formats(self, mdoc, mtvn_id):
         if re.match(r'.*/(error_country_block\.swf|geoblock\.mp4)$', mdoc.find('.//src').text) is not None:
@@ -164,7 +164,7 @@ class MTVServicesInfoExtractor(InfoExtractor):
         if mgid is None or ':' not in mgid:
             mgid = self._search_regex(
                 [r'data-mgid="(.*?)"', r'swfobject.embedSWF\(".*?(mgid:.*?)"'],
-                webpage, u'mgid')
+                webpage, 'mgid')

         return self._get_videos_info(mgid)
@@ -245,7 +245,7 @@ class MTVIE(MTVServicesInfoExtractor):
         m_vevo = re.search(r'isVevoVideo = true;.*?vevoVideoId = "(.*?)";',
                            webpage, re.DOTALL)
         if m_vevo:
-            vevo_id = m_vevo.group(1);
+            vevo_id = m_vevo.group(1)
             self.to_screen('Vevo video detected: %s' % vevo_id)
             return self.url_result('vevo:%s' % vevo_id, ie='Vevo')


@@ -73,4 +73,3 @@ class MuenchenTVIE(InfoExtractor):
             'is_live': True,
             'thumbnail': thumbnail,
         }
-


@@ -1,47 +1,48 @@
-import re
-import json
+from __future__ import unicode_literals

 from .common import InfoExtractor
-from ..utils import (
+from ..compat import (
     compat_urllib_parse,
-    determine_ext,
 )


 class MuzuTVIE(InfoExtractor):
     _VALID_URL = r'https?://www\.muzu\.tv/(.+?)/(.+?)/(?P<id>\d+)'
-    IE_NAME = u'muzu.tv'
+    IE_NAME = 'muzu.tv'

     _TEST = {
-        u'url': u'http://www.muzu.tv/defected/marcashken-featuring-sos-cat-walk-original-mix-music-video/1981454/',
-        u'file': u'1981454.mp4',
-        u'md5': u'98f8b2c7bc50578d6a0364fff2bfb000',
-        u'info_dict': {
-            u'title': u'Cat Walk (Original Mix)',
-            u'description': u'md5:90e868994de201b2570e4e5854e19420',
-            u'uploader': u'MarcAshken featuring SOS',
+        'url': 'http://www.muzu.tv/defected/marcashken-featuring-sos-cat-walk-original-mix-music-video/1981454/',
+        'md5': '98f8b2c7bc50578d6a0364fff2bfb000',
+        'info_dict': {
+            'id': '1981454',
+            'ext': 'mp4',
+            'title': 'Cat Walk (Original Mix)',
+            'description': 'md5:90e868994de201b2570e4e5854e19420',
+            'uploader': 'MarcAshken featuring SOS',
         },
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)

-        info_data = compat_urllib_parse.urlencode({'format': 'json',
+        info_data = compat_urllib_parse.urlencode({
+            'format': 'json',
             'url': url,
         })
-        video_info_page = self._download_webpage('http://www.muzu.tv/api/oembed/?%s' % info_data,
-                                                 video_id, u'Downloading video info')
-        info = json.loads(video_info_page)
+        info = self._download_json(
+            'http://www.muzu.tv/api/oembed/?%s' % info_data,
+            video_id, 'Downloading video info')

-        player_info_page = self._download_webpage('http://player.muzu.tv/player/playerInit?ai=%s' % video_id,
-                                                  video_id, u'Downloading player info')
-        video_info = json.loads(player_info_page)['videos'][0]
-        for quality in ['1080' , '720', '480', '360']:
+        player_info = self._download_json(
+            'http://player.muzu.tv/player/playerInit?ai=%s' % video_id,
+            video_id, 'Downloading player info')
+        video_info = player_info['videos'][0]
+        for quality in ['1080', '720', '480', '360']:
             if video_info.get('v%s' % quality):
                 break

-        data = compat_urllib_parse.urlencode({'ai': video_id,
+        data = compat_urllib_parse.urlencode({
+            'ai': video_id,
             # Even if each time you watch a video the hash changes,
             # it seems to work for different videos, and it will work
             # even if you use any non empty string as a hash
@@ -49,15 +50,15 @@ class MuzuTVIE(InfoExtractor):
             'device': 'web',
             'qv': quality,
         })
-        video_url_page = self._download_webpage('http://player.muzu.tv/player/requestVideo?%s' % data,
-                                                video_id, u'Downloading video url')
-        video_url_info = json.loads(video_url_page)
+        video_url_info = self._download_json(
+            'http://player.muzu.tv/player/requestVideo?%s' % data,
+            video_id, 'Downloading video url')

         video_url = video_url_info['url']

-        return {'id': video_id,
+        return {
+            'id': video_id,
             'title': info['title'],
             'url': video_url,
-            'ext': determine_ext(video_url),
             'thumbnail': info['thumbnail_url'],
             'description': info['description'],
             'uploader': info['author_name'],
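This hunk shows the recurring pattern of the "Modernize" commits: `_download_webpage` followed by a manual `json.loads` collapses into the `_download_json` helper, which fetches, decodes, and reports errors in one step. A minimal before/after sketch (the `api_url` name is illustrative, not from the diff):

    # Before: two steps, with json imported at module level.
    page = self._download_webpage(api_url, video_id, 'Downloading video info')
    info = json.loads(page)

    # After: one call downloads and parses the JSON document.
    info = self._download_json(api_url, video_id, 'Downloading video info')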


@@ -4,7 +4,7 @@ import re
 import json

 from .common import InfoExtractor
-from ..utils import (
+from ..compat import (
     compat_str,
 )
@@ -52,8 +52,8 @@ class MySpaceIE(InfoExtractor):
         if mobj.group('mediatype').startswith('music/song'):
             # songs don't store any useful info in the 'context' variable
             def search_data(name):
-                return self._search_regex(r'data-%s="(.*?)"' % name, webpage,
-                    name)
+                return self._search_regex(
+                    r'data-%s="(.*?)"' % name, webpage, name)

             streamUrl = search_data('stream-url')
             info = {
                 'id': video_id,
@@ -62,8 +62,8 @@ class MySpaceIE(InfoExtractor):
                 'thumbnail': self._og_search_thumbnail(webpage),
             }
         else:
-            context = json.loads(self._search_regex(r'context = ({.*?});', webpage,
-                u'context'))
+            context = json.loads(self._search_regex(
+                r'context = ({.*?});', webpage, 'context'))
             video = context['video']
             streamUrl = video['streamUrl']
             info = {


@@ -33,7 +33,7 @@ class MyVideoIE(InfoExtractor):
     # Original Code from: https://github.com/dersphere/plugin.video.myvideo_de.git
     # Released into the Public Domain by Tristan Fischer on 2013-05-19
    # https://github.com/rg3/youtube-dl/pull/842
-    def __rc4crypt(self,data, key):
+    def __rc4crypt(self, data, key):
         x = 0
         box = list(range(256))
         for i in list(range(256)):
@@ -49,10 +49,10 @@ class MyVideoIE(InfoExtractor):
             out += chr(compat_ord(char) ^ box[(box[x] + box[y]) % 256])
         return out

-    def __md5(self,s):
+    def __md5(self, s):
         return hashlib.md5(s).hexdigest().encode()

-    def _real_extract(self,url):
+    def _real_extract(self, url):
         mobj = re.match(self._VALID_URL, url)
         video_id = mobj.group('id')
@@ -173,4 +173,3 @@ class MyVideoIE(InfoExtractor):
             'play_path': video_playpath,
             'player_url': video_swfobj,
         }
-


@@ -40,7 +40,7 @@ class NaverIE(InfoExtractor):
             raise ExtractorError('couldn\'t extract vid and key')
         vid = m_id.group(1)
         key = m_id.group(2)
-        query = compat_urllib_parse.urlencode({'vid': vid, 'inKey': key,})
+        query = compat_urllib_parse.urlencode({'vid': vid, 'inKey': key, })
         query_urls = compat_urllib_parse.urlencode({
             'masterVid': vid,
             'protocol': 'p2p',


@@ -39,7 +39,6 @@ class NBAIE(InfoExtractor):
         duration = parse_duration(
             self._html_search_meta('duration', webpage, 'duration', fatal=False))

-
         return {
             'id': shortened_video_id,
             'url': video_url,


@@ -4,9 +4,11 @@ import re
 import json

 from .common import InfoExtractor
-from ..utils import (
+from ..compat import (
     compat_urlparse,
     compat_urllib_parse,
+)
+from ..utils import (
     unified_strdate,
 )
@@ -122,7 +124,7 @@ class NHLVideocenterIE(NHLBaseInfoExtractor):
         response = self._download_webpage(request_url, playlist_title)
         response = self._fix_json(response)
         if not response.strip():
-            self._downloader.report_warning(u'Got an empty reponse, trying '
+            self._downloader.report_warning('Got an empty reponse, trying '
                                             'adding the "newvideos" parameter')
             response = self._download_webpage(request_url + '&newvideos=true',
                                               playlist_title)
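Several files in this diff make the same import split: the Python 2/3 compatibility shims (`compat_*`) move from `youtube_dl.utils` into the new `youtube_dl.compat` module, while genuine helpers such as `unified_strdate` stay in `utils`. Inside an extractor the resulting header reads as in the hunk above:

    from ..compat import (
        compat_urllib_parse,
        compat_urlparse,
    )
    from ..utils import (
        unified_strdate,
    )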


@@ -27,8 +27,7 @@ class NineGagIE(InfoExtractor):
             "thumbnail": "re:^https?://",
         },
         'add_ie': ['Youtube']
-    },
-    {
+    }, {
         'url': 'http://9gag.tv/p/KklwM/alternate-banned-opening-scene-of-gravity?ref=fsidebar',
         'info_dict': {
             'id': 'KklwM',


@@ -97,4 +97,3 @@ class OoyalaIE(InfoExtractor):
             }
         else:
             return self._extract_result(videos_info[0], videos_more_info)
-


@@ -6,6 +6,7 @@ import re
 from .common import InfoExtractor
 from ..utils import int_or_none

+
 class PodomaticIE(InfoExtractor):
     IE_NAME = 'podomatic'
     _VALID_URL = r'^(?P<proto>https?)://(?P<channel>[^.]+)\.podomatic\.com/entry/(?P<id>[^?]+)'


@@ -56,7 +56,7 @@ class PornHubIE(InfoExtractor):
         comment_count = self._extract_count(
             r'All comments \(<var class="videoCommentCount">([\d,\.]+)</var>', webpage, 'comment')

-        video_urls = list(map(compat_urllib_parse.unquote , re.findall(r'"quality_[0-9]{3}p":"([^"]+)', webpage)))
+        video_urls = list(map(compat_urllib_parse.unquote, re.findall(r'"quality_[0-9]{3}p":"([^"]+)', webpage)))
         if webpage.find('"encrypted":true') != -1:
             password = compat_urllib_parse.unquote_plus(self._html_search_regex(r'"video_title":"([^"]+)', webpage, 'password'))
             video_urls = list(map(lambda s: aes_decrypt_text(s, password, 32).decode('utf-8'), video_urls))


@@ -38,7 +38,7 @@ class PornotubeIE(InfoExtractor):
         video_url = self._search_regex(VIDEO_URL_RE, webpage, 'video url')
         video_url = compat_urllib_parse.unquote(video_url)

-        #Get the uploaded date
+        # Get the uploaded date
         VIDEO_UPLOADED_RE = r'<div class="video_added_by">Added (?P<date>[0-9\/]+) by'
         upload_date = self._html_search_regex(VIDEO_UPLOADED_RE, webpage, 'upload date', fatal=False)
         if upload_date:


@@ -1,7 +1,5 @@
 from __future__ import unicode_literals

-import re
-
 from .common import InfoExtractor
@@ -9,32 +7,23 @@ class RedTubeIE(InfoExtractor):
     _VALID_URL = r'http://(?:www\.)?redtube\.com/(?P<id>[0-9]+)'
     _TEST = {
         'url': 'http://www.redtube.com/66418',
-        'file': '66418.mp4',
-        # md5 varies from time to time, as in
-        # https://travis-ci.org/rg3/youtube-dl/jobs/14052463#L295
-        #'md5': u'7b8c22b5e7098a3e1c09709df1126d2d',
         'info_dict': {
+            'id': '66418',
+            'ext': 'mp4',
             "title": "Sucked on a toilet",
             "age_limit": 18,
         }
     }

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
-        video_extension = 'mp4'
+        video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)

-        self.report_extraction(video_id)
         video_url = self._html_search_regex(
-            r'<source src="(.+?)" type="video/mp4">', webpage, u'video URL')
+            r'<source src="(.+?)" type="video/mp4">', webpage, 'video URL')
         video_title = self._html_search_regex(
             r'<h1 class="videoTitle[^"]*">(.+?)</h1>',
-            webpage, u'title')
+            webpage, 'title')
         video_thumbnail = self._og_search_thumbnail(webpage)

         # No self-labeling, but they describe themselves as
@@ -44,7 +33,7 @@ class RedTubeIE(InfoExtractor):
         return {
             'id': video_id,
             'url': video_url,
-            'ext': video_extension,
+            'ext': 'mp4',
             'title': video_title,
             'thumbnail': video_thumbnail,
             'age_limit': age_limit,
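`_match_id`, used here and in several other modernized extractors, is the `InfoExtractor` shorthand for the `re.match(self._VALID_URL, url)` / `mobj.group('id')` boilerplate this hunk deletes. Roughly (a sketch of the idea, not the exact library code):

    import re

    class InfoExtractor(object):
        def _match_id(self, url):
            # _VALID_URL must define an 'id' group for this to work.
            mobj = re.match(self._VALID_URL, url)
            return mobj.group('id')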


@@ -41,4 +41,3 @@ class RingTVIE(InfoExtractor):
             'thumbnail': thumbnail_url,
             'description': description,
         }
-

Some files were not shown because too many files have changed in this diff.