Compare commits

1 commit: 00b350d209
.github/ISSUE_TEMPLATE/1_broken_site.md (vendored, 63 lines removed)
@@ -1,63 +0,0 @@
----
-name: Broken site support
-about: Report broken or misfunctioning site
-title: ''
----
-
-<!--
-
-######################################################################
-  WARNING!
-  IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
-######################################################################
-
--->
-
-
-## Checklist
-
-<!--
-Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.09.20. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
-- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
-- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
-- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
-- Finally, put x into all relevant boxes (like this [x])
--->
-
-- [ ] I'm reporting a broken site support
-- [ ] I've verified that I'm running youtube-dl version **2020.09.20**
-- [ ] I've checked that all provided URLs are alive and playable in a browser
-- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
-- [ ] I've searched the bugtracker for similar issues including closed ones
-
-
-## Verbose log
-
-<!--
-Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
-Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
-[debug] System config: []
-[debug] User config: []
-[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
-[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dl version 2020.09.20
-[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
-[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
-[debug] Proxy map: {}
-<more lines>
--->
-
-```
-PASTE VERBOSE LOG HERE
-```
-
-
-## Description
-
-<!--
-Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
-If work on your issue requires account credentials please provide them or explain how one can obtain them.
--->
-
-WRITE DESCRIPTION HERE
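The checklist above asks reporters to quote or escape URLs with special characters before filing. A minimal sketch of why, assuming a POSIX shell; the URL reuses the example video id from the templates, and the commented-out youtube-dl invocation assumes youtube-dl is installed:

```shell
# An unquoted '&' in a URL ends the command and backgrounds it, so only
# part of the URL would reach youtube-dl. Single quotes keep the whole
# URL as one argument.
url='https://www.youtube.com/watch?v=BaW_jenozKc&t=10'
# youtube-dl -v "$url"   # hypothetical invocation, not from this diff
printf '%s\n' "$url"
```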
.github/ISSUE_TEMPLATE/2_site_support_request.md (vendored, 54 lines removed)
@@ -1,54 +0,0 @@
----
-name: Site support request
-about: Request support for a new site
-title: ''
-labels: 'site-support-request'
----
-
-<!--
-
-######################################################################
-  WARNING!
-  IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
-######################################################################
-
--->
-
-
-## Checklist
-
-<!--
-Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.09.20. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
-- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
-- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
-- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
-- Finally, put x into all relevant boxes (like this [x])
--->
-
-- [ ] I'm reporting a new site support request
-- [ ] I've verified that I'm running youtube-dl version **2020.09.20**
-- [ ] I've checked that all provided URLs are alive and playable in a browser
-- [ ] I've checked that none of provided URLs violate any copyrights
-- [ ] I've searched the bugtracker for similar site support requests including closed ones
-
-
-## Example URLs
-
-<!--
-Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
--->
-
-- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
-- Single video: https://youtu.be/BaW_jenozKc
-- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
-
-
-## Description
-
-<!--
-Provide any additional information.
-If work on your issue requires account credentials please provide them or explain how one can obtain them.
--->
-
-WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE/3_site_feature_request.md (vendored, 37 lines removed)
@@ -1,37 +0,0 @@
----
-name: Site feature request
-about: Request a new functionality for a site
-title: ''
----
-
-<!--
-
-######################################################################
-  WARNING!
-  IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
-######################################################################
-
--->
-
-
-## Checklist
-
-<!--
-Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.09.20. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
-- Search the bugtracker for similar site feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
-- Finally, put x into all relevant boxes (like this [x])
--->
-
-- [ ] I'm reporting a site feature request
-- [ ] I've verified that I'm running youtube-dl version **2020.09.20**
-- [ ] I've searched the bugtracker for similar site feature requests including closed ones
-
-
-## Description
-
-<!--
-Provide an explanation of your site feature request in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
--->
-
-WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE/4_bug_report.md (vendored, 65 lines removed)
@@ -1,65 +0,0 @@
----
-name: Bug report
-about: Report a bug unrelated to any particular site or extractor
-title: ''
----
-
-<!--
-
-######################################################################
-  WARNING!
-  IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
-######################################################################
-
--->
-
-
-## Checklist
-
-<!--
-Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.09.20. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
-- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
-- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
-- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
-- Read bugs section in FAQ: http://yt-dl.org/reporting
-- Finally, put x into all relevant boxes (like this [x])
--->
-
-- [ ] I'm reporting a broken site support issue
-- [ ] I've verified that I'm running youtube-dl version **2020.09.20**
-- [ ] I've checked that all provided URLs are alive and playable in a browser
-- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
-- [ ] I've searched the bugtracker for similar bug reports including closed ones
-- [ ] I've read bugs section in FAQ
-
-
-## Verbose log
-
-<!--
-Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
-Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
-[debug] System config: []
-[debug] User config: []
-[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
-[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dl version 2020.09.20
-[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
-[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
-[debug] Proxy map: {}
-<more lines>
--->
-
-```
-PASTE VERBOSE LOG HERE
-```
-
-
-## Description
-
-<!--
-Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-If work on your issue requires account credentials please provide them or explain how one can obtain them.
--->
-
-WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE/5_feature_request.md (vendored, 38 lines removed)
@@ -1,38 +0,0 @@
----
-name: Feature request
-about: Request a new functionality unrelated to any particular site or extractor
-title: ''
-labels: 'request'
----
-
-<!--
-
-######################################################################
-  WARNING!
-  IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
-######################################################################
-
--->
-
-
-## Checklist
-
-<!--
-Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.09.20. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
-- Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
-- Finally, put x into all relevant boxes (like this [x])
--->
-
-- [ ] I'm reporting a feature request
-- [ ] I've verified that I'm running youtube-dl version **2020.09.20**
-- [ ] I've searched the bugtracker for similar feature requests including closed ones
-
-
-## Description
-
-<!--
-Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
--->
-
-WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE/6_question.md (vendored, 38 lines removed)
@@ -1,38 +0,0 @@
----
-name: Ask question
-about: Ask youtube-dl related question
-title: ''
-labels: 'question'
----
-
-<!--
-
-######################################################################
-  WARNING!
-  IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
-######################################################################
-
--->
-
-
-## Checklist
-
-<!--
-Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions
-- Search the bugtracker for similar questions: http://yt-dl.org/search-issues
-- Finally, put x into all relevant boxes (like this [x])
--->
-
-- [ ] I'm asking a question
-- [ ] I've looked through the README and FAQ for similar questions
-- [ ] I've searched the bugtracker for similar questions including closed ones
-
-
-## Question
-
-<!--
-Ask your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient.
--->
-
-WRITE QUESTION HERE
.github/ISSUE_TEMPLATE_tmpl/1_broken_site.md (vendored, 63 lines removed)
@@ -1,63 +0,0 @@
----
-name: Broken site support
-about: Report broken or misfunctioning site
-title: ''
----
-
-<!--
-
-######################################################################
-  WARNING!
-  IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
-######################################################################
-
--->
-
-
-## Checklist
-
-<!--
-Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is %(version)s. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
-- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
-- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
-- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
-- Finally, put x into all relevant boxes (like this [x])
--->
-
-- [ ] I'm reporting a broken site support
-- [ ] I've verified that I'm running youtube-dl version **%(version)s**
-- [ ] I've checked that all provided URLs are alive and playable in a browser
-- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
-- [ ] I've searched the bugtracker for similar issues including closed ones
-
-
-## Verbose log
-
-<!--
-Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
-Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
-[debug] System config: []
-[debug] User config: []
-[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
-[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dl version %(version)s
-[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
-[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
-[debug] Proxy map: {}
-<more lines>
--->
-
-```
-PASTE VERBOSE LOG HERE
-```
-
-
-## Description
-
-<!--
-Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
-If work on your issue requires account credentials please provide them or explain how one can obtain them.
--->
-
-WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE_tmpl/2_site_support_request.md (vendored, 54 lines removed)
@@ -1,54 +0,0 @@
----
-name: Site support request
-about: Request support for a new site
-title: ''
-labels: 'site-support-request'
----
-
-<!--
-
-######################################################################
-  WARNING!
-  IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
-######################################################################
-
--->
-
-
-## Checklist
-
-<!--
-Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is %(version)s. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
-- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
-- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
-- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
-- Finally, put x into all relevant boxes (like this [x])
--->
-
-- [ ] I'm reporting a new site support request
-- [ ] I've verified that I'm running youtube-dl version **%(version)s**
-- [ ] I've checked that all provided URLs are alive and playable in a browser
-- [ ] I've checked that none of provided URLs violate any copyrights
-- [ ] I've searched the bugtracker for similar site support requests including closed ones
-
-
-## Example URLs
-
-<!--
-Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
--->
-
-- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
-- Single video: https://youtu.be/BaW_jenozKc
-- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
-
-
-## Description
-
-<!--
-Provide any additional information.
-If work on your issue requires account credentials please provide them or explain how one can obtain them.
--->
-
-WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.md (vendored, 37 lines removed)
@@ -1,37 +0,0 @@
----
-name: Site feature request
-about: Request a new functionality for a site
-title: ''
----
-
-<!--
-
-######################################################################
-  WARNING!
-  IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
-######################################################################
-
--->
-
-
-## Checklist
-
-<!--
-Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is %(version)s. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
-- Search the bugtracker for similar site feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
-- Finally, put x into all relevant boxes (like this [x])
--->
-
-- [ ] I'm reporting a site feature request
-- [ ] I've verified that I'm running youtube-dl version **%(version)s**
-- [ ] I've searched the bugtracker for similar site feature requests including closed ones
-
-
-## Description
-
-<!--
-Provide an explanation of your site feature request in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
--->
-
-WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE_tmpl/4_bug_report.md (vendored, 65 lines removed)
@@ -1,65 +0,0 @@
----
-name: Bug report
-about: Report a bug unrelated to any particular site or extractor
-title: ''
----
-
-<!--
-
-######################################################################
-  WARNING!
-  IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
-######################################################################
-
--->
-
-
-## Checklist
-
-<!--
-Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is %(version)s. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
-- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
-- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
-- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
-- Read bugs section in FAQ: http://yt-dl.org/reporting
-- Finally, put x into all relevant boxes (like this [x])
--->
-
-- [ ] I'm reporting a broken site support issue
-- [ ] I've verified that I'm running youtube-dl version **%(version)s**
-- [ ] I've checked that all provided URLs are alive and playable in a browser
-- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
-- [ ] I've searched the bugtracker for similar bug reports including closed ones
-- [ ] I've read bugs section in FAQ
-
-
-## Verbose log
-
-<!--
-Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
-Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
-[debug] System config: []
-[debug] User config: []
-[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
-[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dl version %(version)s
-[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
-[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
-[debug] Proxy map: {}
-<more lines>
--->
-
-```
-PASTE VERBOSE LOG HERE
-```
-
-
-## Description
-
-<!--
-Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-If work on your issue requires account credentials please provide them or explain how one can obtain them.
--->
-
-WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE_tmpl/5_feature_request.md (vendored, 38 lines removed)
@@ -1,38 +0,0 @@
----
-name: Feature request
-about: Request a new functionality unrelated to any particular site or extractor
-title: ''
-labels: 'request'
----
-
-<!--
-
-######################################################################
-  WARNING!
-  IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
-######################################################################
-
--->
-
-
-## Checklist
-
-<!--
-Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is %(version)s. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
-- Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
-- Finally, put x into all relevant boxes (like this [x])
--->
-
-- [ ] I'm reporting a feature request
-- [ ] I've verified that I'm running youtube-dl version **%(version)s**
-- [ ] I've searched the bugtracker for similar feature requests including closed ones
-
-
-## Description
-
-<!--
-Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
--->
-
-WRITE DESCRIPTION HERE
.github/PULL_REQUEST_TEMPLATE.md (vendored, 28 lines removed)
@@ -1,28 +0,0 @@
-## Please follow the guide below
-
-- You will be asked some questions, please read them **carefully** and answer honestly
-- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
-- Use *Preview* tab to see how your *pull request* will actually look like
-
----
-
-### Before submitting a *pull request* make sure you have:
-- [ ] At least skimmed through [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) sections
-- [ ] [Searched](https://github.com/ytdl-org/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
-- [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
-
-### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
-- [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
-- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
-
-### What is the purpose of your *pull request*?
-- [ ] Bug fix
-- [ ] Improvement
-- [ ] New extractor
-- [ ] New feature
-
----
-
-### Description of your *pull request* and other information
-
-Explanation of your *pull request* in arbitrary form goes here. Please make sure the description explains the purpose and effect of your *pull request* and is worded well enough to be understood. Provide as much context and examples as possible.
.gitignore (vendored, 27 changed lines)
@@ -1,6 +1,5 @@
 *.pyc
 *.pyo
-*.class
 *~
 *.DS_Store
 wine-py2exe/
@@ -12,8 +11,6 @@ MANIFEST
 README.txt
 youtube-dl.1
 youtube-dl.bash-completion
-youtube-dl.fish
-youtube_dl/extractor/lazy_extractors.py
 youtube-dl
 youtube-dl.exe
 youtube-dl.tar.gz
@@ -22,32 +19,10 @@ cover/
 updates_key.pem
 *.egg-info
 *.srt
-*.ttml
 *.sbv
 *.vtt
 *.flv
 *.mp4
-*.m4a
-*.m4v
-*.mp3
-*.3gp
-*.wav
-*.ape
-*.mkv
-*.swf
 *.part
-*.ytdl
+test/testdata
-*.swp
-test/local_parameters.json
 .tox
-youtube-dl.zsh
-
-# IntelliJ related files
-.idea
-*.iml
-
-tmp/
-venv/
-
-# VS Code related files
-.vscode
57 .travis.yml

```diff
@@ -2,49 +2,18 @@ language: python
 python:
   - "2.6"
   - "2.7"
-  - "3.2"
   - "3.3"
-  - "3.4"
-  - "3.5"
-  - "3.6"
-  - "pypy"
-  - "pypy3"
-dist: trusty
-env:
-  - YTDL_TEST_SET=core
-  - YTDL_TEST_SET=download
-jobs:
-  include:
-    - python: 3.7
-      dist: xenial
-      env: YTDL_TEST_SET=core
-    - python: 3.7
-      dist: xenial
-      env: YTDL_TEST_SET=download
-    - python: 3.8
-      dist: xenial
-      env: YTDL_TEST_SET=core
-    - python: 3.8
-      dist: xenial
-      env: YTDL_TEST_SET=download
-    - python: 3.8-dev
-      dist: xenial
-      env: YTDL_TEST_SET=core
-    - python: 3.8-dev
-      dist: xenial
-      env: YTDL_TEST_SET=download
-    - env: JYTHON=true; YTDL_TEST_SET=core
-    - env: JYTHON=true; YTDL_TEST_SET=download
-    - name: flake8
-      python: 3.8
-      dist: xenial
-      install: pip install flake8
-      script: flake8 .
-  fast_finish: true
-  allow_failures:
-    - env: YTDL_TEST_SET=download
-    - env: JYTHON=true; YTDL_TEST_SET=core
-    - env: JYTHON=true; YTDL_TEST_SET=download
 before_install:
-  - if [ "$JYTHON" == "true" ]; then ./devscripts/install_jython.sh; export PATH="$HOME/jython/bin:$PATH"; fi
-script: ./devscripts/run_tests.sh
+  - sudo apt-get update -qq
+  - sudo apt-get install -qq rtmpdump
+script: nosetests test --verbose
+notifications:
+  email:
+    - filippo.valsorda@gmail.com
+    - phihag@phihag.de
+    - jaime.marquinez.ferrandiz+travis@gmail.com
+    - yasoob.khld@gmail.com
+#  irc:
+#    channels:
+#      - "irc.freenode.org#youtube-dl"
+#    skip_join: true
```
248 AUTHORS

@@ -1,248 +0,0 @@

Ricardo Garcia Gonzalez
Danny Colligan
Benjamin Johnson
Vasyl' Vavrychuk
Witold Baryluk
Paweł Paprota
Gergely Imreh
Rogério Brito
Philipp Hagemeister
Sören Schulze
Kevin Ngo
Ori Avtalion
shizeeg
Filippo Valsorda
Christian Albrecht
Dave Vasilevsky
Jaime Marquínez Ferrándiz
Jeff Crouse
Osama Khalid
Michael Walter
M. Yasoob Ullah Khalid
Julien Fraichard
Johny Mo Swag
Axel Noack
Albert Kim
Pierre Rudloff
Huarong Huo
Ismael Mejía
Steffan Donal
Andras Elso
Jelle van der Waa
Marcin Cieślak
Anton Larionov
Takuya Tsuchida
Sergey M.
Michael Orlitzky
Chris Gahan
Saimadhav Heblikar
Mike Col
Oleg Prutz
pulpe
Andreas Schmitz
Michael Kaiser
Niklas Laxström
David Triendl
Anthony Weems
David Wagner
Juan C. Olivares
Mattias Harrysson
phaer
Sainyam Kapoor
Nicolas Évrard
Jason Normore
Hoje Lee
Adam Thalhammer
Georg Jähnig
Ralf Haring
Koki Takahashi
Ariset Llerena
Adam Malcontenti-Wilson
Tobias Bell
Naglis Jonaitis
Charles Chen
Hassaan Ali
Dobrosław Żybort
David Fabijan
Sebastian Haas
Alexander Kirk
Erik Johnson
Keith Beckman
Ole Ernst
Aaron McDaniel (mcd1992)
Magnus Kolstad
Hari Padmanaban
Carlos Ramos
5moufl
lenaten
Dennis Scheiba
Damon Timm
winwon
Xavier Beynon
Gabriel Schubiner
xantares
Jan Matějka
Mauroy Sébastien
William Sewell
Dao Hoang Son
Oskar Jauch
Matthew Rayfield
t0mm0
Tithen-Firion
Zack Fernandes
cryptonaut
Adrian Kretz
Mathias Rav
Petr Kutalek
Will Glynn
Max Reimann
Cédric Luthi
Thijs Vermeir
Joel Leclerc
Christopher Krooss
Ondřej Caletka
Dinesh S
Johan K. Jensen
Yen Chi Hsuan
Enam Mijbah Noor
David Luhmer
Shaya Goldberg
Paul Hartmann
Frans de Jonge
Robin de Rooij
Ryan Schmidt
Leslie P. Polzer
Duncan Keall
Alexander Mamay
Devin J. Pohly
Eduardo Ferro Aldama
Jeff Buchbinder
Amish Bhadeshia
Joram Schrijver
Will W.
Mohammad Teimori Pabandi
Roman Le Négrate
Matthias Küch
Julian Richen
Ping O.
Mister Hat
Peter Ding
jackyzy823
George Brighton
Remita Amine
Aurélio A. Heckert
Bernhard Minks
sceext
Zach Bruggeman
Tjark Saul
slangangular
Behrouz Abbasi
ngld
nyuszika7h
Shaun Walbridge
Lee Jenkins
Anssi Hannula
Lukáš Lalinský
Qijiang Fan
Rémy Léone
Marco Ferragina
reiv
Muratcan Simsek
Evan Lu
flatgreen
Brian Foley
Vignesh Venkat
Tom Gijselinck
Founder Fang
Andrew Alexeyew
Saso Bezlaj
Erwin de Haan
Jens Wille
Robin Houtevelts
Patrick Griffis
Aidan Rowe
mutantmonkey
Ben Congdon
Kacper Michajłow
José Joaquín Atria
Viťas Strádal
Kagami Hiiragi
Philip Huppert
blahgeek
Kevin Deldycke
inondle
Tomáš Čech
Déstin Reed
Roman Tsiupa
Artur Krysiak
Jakub Adam Wieczorek
Aleksandar Topuzović
Nehal Patel
Rob van Bekkum
Petr Zvoníček
Pratyush Singh
Aleksander Nitecki
Sebastian Blunt
Matěj Cepl
Xie Yanbo
Philip Xu
John Hawkinson
Rich Leeper
Zhong Jianxin
Thor77
Mattias Wadman
Arjan Verwer
Costy Petrisor
Logan B
Alex Seiler
Vijay Singh
Paul Hartmann
Stephen Chen
Fabian Stahl
Bagira
Odd Stråbø
Philip Herzog
Thomas Christlieb
Marek Rusinowski
Tobias Gruetzmacher
Olivier Bilodeau
Lars Vierbergen
Juanjo Benages
Xiao Di Guan
Thomas Winant
Daniel Twardowski
Jeremie Jarosh
Gerard Rovira
Marvin Ewald
Frédéric Bournival
Timendum
gritstub
Adam Voss
Mike Fährmann
Jan Kundrát
Giuseppe Fabiano
Örn Guðjónsson
Parmjit Virk
Genki Sky
Ľuboš Katrinec
Corey Nicholson
Ashutosh Chaudhary
John Dong
Tatsuyuki Ishi
Daniel Weber
Kay Bouché
Yang Hongbo
Lei Wang
Petr Novák
Leonardo Taccari
Martin Weinelt
Surya Oktafendri
TingPing
Alexandre Macabies
Bastian de Groot
Niklas Haas
András Veres-Szentkirályi
Enes Solak
Nathan Rossi
Thomas van der Berg
Luca Cherubin
14 CHANGELOG Normal file

@@ -0,0 +1,14 @@

2013.01.02  Codename: GIULIA

* Add support for ComedyCentral clips <nto>
* Corrected Vimeo description fetching <Nick Daniels>
* Added the --no-post-overwrites argument <Barbu Paul - Gheorghe>
* --verbose offers more environment info
* New info_dict field: uploader_id
* New updates system, with signature checking
* New IEs: NBA, JustinTV, FunnyOrDie, TweetReel, Steam, Ustream
* Fixed IEs: BlipTv
* Fixed for Python 3 IEs: Xvideo, Youku, XNXX, Dailymotion, Vimeo, InfoQ
* Simplified IEs and test code
* Various (Python 3 and other) fixes
* Revamped and expanded tests
434 CONTRIBUTING.md

@@ -1,434 +0,0 @@

**Please include the full output of youtube-dl when run with `-v`**, i.e. **add** the `-v` flag to **your command line**, copy the **whole** output and post it in the issue body wrapped in \`\`\` for better formatting. It should look similar to this:
```
$ youtube-dl -v <your command line>
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'https://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2015.12.06
[debug] Git HEAD: 135392e
[debug] Python version 2.6.6 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
...
```
**Do not post screenshots of verbose logs; only plain text is acceptable.**

The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore do not get solved in short order, if ever.

Please re-read your issue once again to avoid a couple of common mistakes (you can and should use this as a checklist):

### Is the description of the issue itself sufficient?

We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this poses an unnecessary drain on our resources. Many contributors, including myself, are also not native speakers, so we may misread some parts.

So please elaborate on what feature you are requesting, or what bug you want to be fixed. Make sure that it's obvious

- What the problem is
- How it could be fixed
- What your proposed solution would look like

If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. As a committer myself, I often get frustrated by these issues, since the only possible way for me to move forward on them is to ask for clarification over and over.

For bug reports, this means that your report should contain the *complete* output of youtube-dl when called with the `-v` flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information.

If your server has multiple IPs or you suspect censorship, adding `--call-home` may be a good idea to get more diagnostics. If the error is `ERROR: Unable to extract ...` and you cannot reproduce it from multiple countries, add `--dump-pages` (warning: this will yield a rather large output, redirect it to the file `log.txt` by adding `>log.txt 2>&1` to your command-line) or upload the `.dump` files you get when you add `--write-pages` [somewhere](https://gist.github.com/).

**Site support requests must contain an example URL**. An example URL is a URL you might want to download, like `https://www.youtube.com/watch?v=BaW_jenozKc`. There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. `https://www.youtube.com/`) is *not* an example URL.

### Are you using the latest version?

Before reporting any issue, type `youtube-dl -U`. This should report that you're up-to-date. About 20% of the reports we receive are already fixed, but people are using outdated versions. This goes for feature requests as well.

### Is the issue already documented?

Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/ytdl-org/youtube-dl/search?type=Issues) of this repository. If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2015.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.

### Why are existing options not enough?

Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/ytdl-org/youtube-dl/blob/master/README.md#options). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.

### Is there enough context in your bug report?

People want to solve problems, and often think they do us a favor by breaking down their larger problems (e.g. wanting to skip already downloaded files) to a specific request (e.g. requesting us to look whether the file exists before downloading the info page). However, what often happens is that they break down the problem into two steps: one simple, and one impossible (or extremely complicated).

We are then presented with a very complicated request when the original problem could be solved far easier, e.g. by recording the downloaded video IDs in a separate file. To avoid this, you must include the greater context where it is non-obvious. In particular, every feature request that does not consist of adding support for a new site should contain a use case scenario that explains in what situation the missing feature would be useful.

### Does the issue involve one problem, and one problem only?

Some of our users seem to think there is a limit of issues they can or should open. There is no limit of issues they can or should open. While it may seem appealing to be able to dump all your issues into one ticket, that means that someone who solves one of your issues cannot mark the issue as closed. Typically, reporting a bunch of issues leads to the ticket lingering since nobody wants to attack that behemoth, until someone mercifully splits the issue into multiple ones.

In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology). Do not request support for vimeo user videos, White house podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of youtube-dl that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service.

### Is anyone going to need the feature?

Only post features that you (or an incapacitated friend you can personally talk to) require. Do not post features because they seem like a good idea. If they are really useful, they will be requested by someone who requires them.

### Is your question about youtube-dl?

It may sound strange, but some bug reports we receive are completely unrelated to youtube-dl and relate to a different, or even the reporter's own, application. Please make sure that you are actually using youtube-dl. If you are using a UI for youtube-dl, report the bug to the maintainer of the actual application providing the UI. On the other hand, if your UI for youtube-dl fails in some way you believe is related to youtube-dl, by all means, go ahead and report the bug.

# DEVELOPER INSTRUCTIONS

Most users do not need to build youtube-dl and can [download the builds](https://ytdl-org.github.io/youtube-dl/download.html) or get them from their distribution.

To run youtube-dl as a developer, you don't need to build anything either. Simply execute

    python -m youtube_dl

To run the test, simply invoke your favorite test runner, or execute a test file directly; any of the following work:

    python -m unittest discover
    python test/test_download.py
    nosetests

See item 6 of [new extractor tutorial](#adding-support-for-a-new-site) for how to run extractor specific test cases.

If you want to create a build of youtube-dl yourself, you'll need

* python
* make (only GNU make is supported)
* pandoc
* zip
* nosetests

### Adding support for a new site

If you want to add support for a new site, first of all **make sure** this site is **not dedicated to [copyright infringement](README.md#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. youtube-dl does **not support** such sites thus pull requests adding support for them **will be rejected**.

After you have ensured this site is distributing its content legally, you can follow this quick list (assuming your service is called `yourextractor`):

1. [Fork this repository](https://github.com/ytdl-org/youtube-dl/fork)
2. Check out the source code with:

        git clone git@github.com:YOUR_GITHUB_USERNAME/youtube-dl.git

3. Start a new git branch with

        cd youtube-dl
        git checkout -b yourextractor

4. Start with this simple template and save it to `youtube_dl/extractor/yourextractor.py`:

    ```python
    # coding: utf-8
    from __future__ import unicode_literals

    from .common import InfoExtractor


    class YourExtractorIE(InfoExtractor):
        _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
        _TEST = {
            'url': 'https://yourextractor.com/watch/42',
            'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
            'info_dict': {
                'id': '42',
                'ext': 'mp4',
                'title': 'Video title goes here',
                'thumbnail': r're:^https?://.*\.jpg$',
                # TODO more properties, either as:
                # * A value
                # * MD5 checksum; start the string with md5:
                # * A regular expression; start the string with re:
                # * Any Python type (for example int or float)
            }
        }

        def _real_extract(self, url):
            video_id = self._match_id(url)
            webpage = self._download_webpage(url, video_id)

            # TODO more code goes here, for example ...
            title = self._html_search_regex(r'<h1>(.+?)</h1>', webpage, 'title')

            return {
                'id': video_id,
                'title': title,
                'description': self._og_search_description(webpage),
                'uploader': self._search_regex(r'<div[^>]+id="uploader"[^>]*>([^<]+)<', webpage, 'uploader', fatal=False),
                # TODO more properties (see youtube_dl/extractor/common.py)
            }
    ```
5. Add an import in [`youtube_dl/extractor/extractors.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/extractors.py).
6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename `_TEST` to `_TESTS` and make it into a list of dictionaries. The tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with an `only_matching` key in the test dict are not counted.
7. Have a look at [`youtube_dl/extractor/common.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303). Add tests and code for as many as you want.
8. Make sure your code follows [youtube-dl coding conventions](#youtube-dl-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):

        $ flake8 youtube_dl/extractor/yourextractor.py

9. Make sure your code works under all [Python](https://www.python.org/) versions claimed supported by youtube-dl, namely 2.6, 2.7, and 3.2+.
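Supporting both Python 2 and 3 mostly comes down to text handling. youtube-dl ships its own compat layer in `youtube_dl.compat`; the stdlib-only fallback below is an illustrative sketch, not the project's actual code:

```python
# Sketch: keep string handling identical on Python 2.6+ and 3.x.
from __future__ import unicode_literals  # literals are text on both versions

import sys

# youtube_dl.compat provides compat_str; shown here with a plain fallback.
if sys.version_info[0] >= 3:
    compat_str = str
else:
    compat_str = unicode  # noqa: F821 (only evaluated on Python 2)

video_id = 42
# Coerce IDs to text via compat_str instead of str(), which is bytes on Python 2
print(compat_str(video_id))
```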
10. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files and [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this:

        $ git add youtube_dl/extractor/extractors.py
        $ git add youtube_dl/extractor/yourextractor.py
        $ git commit -m '[yourextractor] Add new extractor'
        $ git push origin yourextractor

11. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.

In any case, thank you very much for your contributions!

## youtube-dl coding conventions

This section introduces guidelines for writing idiomatic, robust and future-proof extractor code.

Extractors are fragile by nature: they depend on the layout of the source data provided by third-party media hosters, which is out of your control and tends to change. As an extractor implementer your task is not only to write code that extracts media links and metadata correctly, but also to minimize dependency on the source's layout and even to anticipate potential future changes. This is important because it keeps the extractor from breaking on minor layout changes, and thus keeps old youtube-dl versions working. Even though such breakage is easily fixed by releasing a new version of youtube-dl with the fix incorporated, all previous versions remain broken in every repository and distro package that is not prompt in fetching the update from us. Needless to say, some non-rolling-release distros may never receive an update at all.

### Mandatory and optional metafields

For extraction to work youtube-dl relies on the metadata your extractor provides, expressed as an [information dictionary](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303), or simply *info dict*. Only the following meta fields in the *info dict* are considered mandatory for a successful extraction:

- `id` (media identifier)
- `title` (media title)
- `url` (media download URL) or `formats`

In fact, only the last is technically mandatory (if you can't figure out the download location of the media, extraction makes no sense), but by convention youtube-dl also treats `id` and `title` as mandatory. These metafields are the critical data without which extraction makes no sense; if any of them fails to be extracted, the extractor is considered completely broken.

[Any field](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L188-L303) apart from the aforementioned ones is considered **optional**. That means that extraction should be **tolerant** of situations where the sources for these fields are unavailable (even if they are always available at the moment) and **future-proof**, so as not to break the extraction of the general-purpose mandatory fields.

#### Example

Say you have some source dictionary `meta` that you've fetched as JSON over HTTP and that has a key `summary`:

```python
meta = self._download_json(url, video_id)
```

Assume at this point `meta`'s layout is:

```python
{
    ...
    "summary": "some fancy summary text",
    ...
}
```

Assume you want to extract `summary` and put it into the resulting info dict as `description`. Since `description` is an optional meta field you should be prepared for this key to be missing from the `meta` dict, so extract it like:

```python
description = meta.get('summary')  # correct
```

and not like:

```python
description = meta['summary']  # incorrect
```

The latter will break the extraction process with a `KeyError` if `summary` disappears from `meta` at some later time, while with the former approach extraction will just go ahead with `description` set to `None`, which is perfectly fine (remember that `None` is equivalent to the absence of data).
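The difference can be seen with a plain dict standing in for `meta`:

```python
meta = {'id': '42'}  # 'summary' has disappeared from the source data

description = meta.get('summary')  # returns None, extraction continues
print(description)

try:
    description = meta['summary']  # raises KeyError, extraction breaks
except KeyError as e:
    print('KeyError: %s' % e)
```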
Similarly, you should pass `fatal=False` when extracting optional data from a webpage with `_search_regex`, `_html_search_regex` or similar methods, for instance:

```python
description = self._search_regex(
    r'<span[^>]+id="title"[^>]*>([^<]+)<',
    webpage, 'description', fatal=False)
```

With `fatal` set to `False`, if `_search_regex` fails to extract `description` it will emit a warning and continue extraction.

You can also pass `default=<some fallback value>`, for example:

```python
description = self._search_regex(
    r'<span[^>]+id="title"[^>]*>([^<]+)<',
    webpage, 'description', default=None)
```

On failure this code will silently continue the extraction with `description` set to `None`. That is useful for metafields that may or may not be present.
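The `fatal`/`default` behaviour can be sketched with a plain-`re` stand-in (a hypothetical helper for illustration, not the real `InfoExtractor._search_regex` implementation):

```python
import re

def search_regex(pattern, text, name, fatal=True, default=None, group=1):
    # Minimal stand-in: return the matched group, raise when fatal and
    # nothing matched, otherwise fall back to the default value.
    m = re.search(pattern, text)
    if m:
        return m.group(group)
    if fatal:
        raise ValueError('Unable to extract %s' % name)
    return default

page = '<span id="title">Some title</span>'
print(search_regex(r'id="title"[^>]*>([^<]+)<', page, 'title'))
print(search_regex(r'id="description"[^>]*>([^<]+)<', page,
                   'description', fatal=False, default='n/a'))
```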
### Provide fallbacks

When extracting metadata try to do so from multiple sources. For example if `title` is present in several places, try extracting from at least some of them. This makes it more future-proof in case some of the sources become unavailable.

#### Example

Say `meta` from the previous example has a `title` and you are about to extract it. Since `title` is a mandatory meta field you should end up with something like:

```python
title = meta['title']
```

If `title` disappears from `meta` in the future due to some changes on the hoster's side the extraction would fail, since `title` is mandatory. That's expected.

Assume you have another source you can extract `title` from, for example the `og:title` HTML meta tag of the `webpage`. In this case you can provide a fallback:

```python
title = meta.get('title') or self._og_search_title(webpage)
```

This code will try to extract from `meta` first and, if that fails, will try extracting `og:title` from the `webpage`.
### Regular expressions

#### Don't capture groups you don't use

A capturing group signals that it is used somewhere in the code. Any group that is not used must be non-capturing.

##### Example

Don't capture the id attribute name here, since you can't use it for anything anyway.

Correct:

```python
r'(?:id|ID)=(?P<id>\d+)'
```

Incorrect:

```python
r'(id|ID)=(?P<id>\d+)'
```
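A quick demonstration of why the non-capturing form is sufficient: the `(?:...)` group still matches both spellings without creating an unused group, while the named group carries the value we actually need.

```python
import re

# (?:id|ID) matches either spelling but does not create a capturing group;
# the named group 'id' holds the value we extract.
m = re.search(r'(?:id|ID)=(?P<id>\d+)', 'player.php?ID=42')
print(m.group('id'))  # 42
```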
#### Make regular expressions relaxed and flexible

When using regular expressions try to write them fuzzy, relaxed and flexible, skipping insignificant parts that are more likely to change, allowing both single and double quotes for quoted values and so on.

##### Example

Say you need to extract `title` from the following HTML code:

```html
<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">some fancy title</span>
```

The code for that task should look similar to:

```python
title = self._search_regex(
    r'<span[^>]+class="title"[^>]*>([^<]+)', webpage, 'title')
```

Or even better:

```python
title = self._search_regex(
    r'<span[^>]+class=(["\'])title\1[^>]*>(?P<title>[^<]+)',
    webpage, 'title', group='title')
```

Note how you tolerate potential changes in the `style` attribute's value, or a switch from double quotes to single quotes for the `class` attribute.

The code definitely should not look like:

```python
title = self._search_regex(
    r'<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">(.*?)</span>',
    webpage, 'title', group='title')
```

### Long lines policy

There is a soft limit to keep lines of code under 80 characters long. This means it should be respected if possible and if it does not make readability and code maintenance worse.

For example, you should **never** split long string literals like URLs or some other often copied entities over multiple lines to fit this limit:

Correct:

```python
'https://www.youtube.com/watch?v=FqZTN594JQw&list=PLMYEtVRpaqY00V9W81Cwmzp6N6vZqfUKD4'
```

Incorrect:

```python
'https://www.youtube.com/watch?v=FqZTN594JQw&list='
'PLMYEtVRpaqY00V9W81Cwmzp6N6vZqfUKD4'
```

### Inline values

Extracting variables is acceptable for reducing code duplication and improving readability of complex expressions. However, you should avoid extracting variables used only once and moving them to opposite parts of the extractor file, which makes reading the linear flow difficult.

#### Example

Correct:

```python
title = self._html_search_regex(r'<title>([^<]+)</title>', webpage, 'title')
```

Incorrect:

```python
TITLE_RE = r'<title>([^<]+)</title>'
# ...some lines of code...
title = self._html_search_regex(TITLE_RE, webpage, 'title')
```

### Collapse fallbacks

Multiple fallback values can quickly become unwieldy. Collapse multiple fallback values into a single expression via a list of patterns.

#### Example

Good:

```python
description = self._html_search_meta(
    ['og:description', 'description', 'twitter:description'],
    webpage, 'description', default=None)
```

Unwieldy:

```python
description = (
    self._og_search_description(webpage, default=None)
    or self._html_search_meta('description', webpage, default=None)
    or self._html_search_meta('twitter:description', webpage, default=None))
```

Methods supporting a list of patterns are: `_search_regex`, `_html_search_regex`, `_og_search_property` and `_html_search_meta`.
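
The fallback behaviour these methods share can be illustrated standalone. The helper below is a simplified, hypothetical stand-in for `_search_regex` (the real method lives on `InfoExtractor` and supports more options); it only shows how a list of patterns is tried in order until one matches:

```python
import re


def search_regex(patterns, haystack, default=None):
    # Simplified stand-in: try each pattern in order and return the
    # first non-None captured group of the first pattern that matches.
    if isinstance(patterns, str):
        patterns = [patterns]
    for pattern in patterns:
        mobj = re.search(pattern, haystack)
        if mobj:
            return next(g for g in mobj.groups() if g is not None)
    return default


webpage = '<meta property="og:title" content="some fancy title">'
title = search_regex(
    [r'<h1[^>]*>([^<]+)</h1>',  # preferred source, absent on this page
     r'<meta property="og:title" content="([^"]+)"'],
    webpage, default=None)
```

Here the first pattern finds nothing, so the second one supplies the title; callers never need to chain `or` expressions themselves.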

### Trailing parentheses

Always move trailing parentheses after the last argument.

#### Example

Correct:

```python
    lambda x: x['ResultSet']['Result'][0]['VideoUrlSet']['VideoUrl'],
    list)
```

Incorrect:

```python
    lambda x: x['ResultSet']['Result'][0]['VideoUrlSet']['VideoUrl'],
    list,
)
```

### Use convenience conversion and parsing functions

Wrap all extracted numeric data into safe functions from [`youtube_dl/utils.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/utils.py): `int_or_none`, `float_or_none`. Use them for string to number conversions as well.
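
The point of these wrappers is that missing or malformed input yields `None` (or a supplied default) instead of an exception. A minimal sketch of that behaviour, assuming a simplified signature (the real `int_or_none` also accepts `scale`, `invscale` and `get_attr`):

```python
def int_or_none(v, default=None):
    # Simplified sketch: convert to int, returning `default` instead of
    # raising when the value is missing or malformed.
    if v is None:
        return default
    try:
        return int(v)
    except (TypeError, ValueError):
        return default


view_count = int_or_none('12345')  # 12345
missing = int_or_none(None)        # None
garbage = int_or_none('n/a')       # None
```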

Use `url_or_none` for safe URL processing.
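
For illustration, a minimal approximation of what `url_or_none`-style checking does (the real helper in `youtube_dl/utils.py` accepts more schemes, e.g. the rtmp family):

```python
import re


def url_or_none(url):
    # Minimal approximation: return the URL unchanged when it looks like
    # an http(s)/ftp(s) or protocol-relative URL, otherwise None.
    if not url or not isinstance(url, str):
        return None
    url = url.strip()
    return url if re.match(r'^(?:(?:https?|ftps?):)?//', url) else None


format_url = url_or_none('https://example.com/video.mp4')  # kept as-is
bad_url = url_or_none('javascript:void(0)')                # None
```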

Use `try_get` for safe metadata extraction from parsed JSON.

Use `unified_strdate` for uniform `upload_date` or any `YYYYMMDD` meta field extraction, `unified_timestamp` for uniform `timestamp` extraction, `parse_filesize` for `filesize` extraction, `parse_count` for count meta field extraction, `parse_resolution` for resolution extraction, `parse_duration` for `duration` extraction and `parse_age_limit` for `age_limit` extraction.
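
The date helpers all normalize messy site-specific strings into one canonical form. A simplified sketch of the `unified_strdate` idea, assuming just three input formats (the real helper in `youtube_dl/utils.py` knows many more):

```python
import datetime


def unified_strdate(date_str):
    # Simplified sketch: normalize a few common date formats to the
    # canonical YYYYMMDD string, returning None on anything unparseable.
    if date_str is None:
        return None
    for fmt in ('%Y-%m-%d', '%d.%m.%Y', '%B %d, %Y'):
        try:
            return datetime.datetime.strptime(date_str, fmt).strftime('%Y%m%d')
        except ValueError:
            pass
    return None


upload_date = unified_strdate('2020-09-20')  # '20200920'
unknown = unified_strdate('yesterday')       # None
```

Extractors then assign the result directly to `upload_date` without per-site parsing code.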

Explore [`youtube_dl/utils.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/utils.py) for more useful convenience functions.

#### More examples

##### Safely extract optional description from parsed JSON

```python
description = try_get(response, lambda x: x['result']['video'][0]['summary'], compat_str)
```

##### Safely extract more optional metadata

```python
video = try_get(response, lambda x: x['result']['video'][0], dict) or {}
description = video.get('summary')
duration = float_or_none(video.get('durationMs'), scale=1000)
view_count = int_or_none(video.get('views'))
```

1 LATEST_VERSION Normal file

@@ -0,0 +1 @@
+2012.12.99

@@ -1,9 +1,5 @@
 include README.md
-include LICENSE
-include AUTHORS
-include ChangeLog
+include test/*.py
+include test/*.json
 include youtube-dl.bash-completion
-include youtube-dl.fish
 include youtube-dl.1
-recursive-include docs Makefile conf.py *.rst
-recursive-include test *

122 Makefile

@@ -1,135 +1,79 @@
[Flattened side-by-side hunk; recoverable changes: the `all`, `install`, `.PHONY` and `pypi-files` targets lose their CONTRIBUTING.md, supportedsites, zsh-completion and fish-completion entries; `clean` is simplified and a separate `cleanall` target appears; the `?=` variable defaults and `SHAREDIR` become plain `=` assignments, the shell-based `SYSCONFDIR` and pandoc `MARKDOWN` detection are replaced by an `ifeq`-based `SYSCONFDIR` block; the `codetest`, `offlinetest`, `issuetemplates`, `supportedsites`, `zsh-completion`, `fish-completion` and `lazy-extractors` targets are dropped; `youtube-dl` is zipped directly with `zip --quiet` instead of via a `zip/` staging directory; `README.md`, `README.txt`, `youtube-dl.1` and `youtube-dl.bash-completion` rules invoke `python` and `pandoc -f markdown` directly instead of `$(PYTHON)`, `$(MARKDOWN)` and `devscripts/prepare_manpage.py`; the tarball rule drops the docs, zsh/fish, setup.cfg, ChangeLog and AUTHORS entries.]

@@ -1,21 +1,10 @@
 __youtube_dl()
 {
-    local cur prev opts fileopts diropts keywords
+    local cur prev opts
     COMPREPLY=()
     cur="${COMP_WORDS[COMP_CWORD]}"
-    prev="${COMP_WORDS[COMP_CWORD-1]}"
     opts="{{flags}}"
-    keywords=":ytfavorites :ytrecommended :ytsubscriptions :ytwatchlater :ythistory"
-    fileopts="-a|--batch-file|--download-archive|--cookies|--load-info"
-    diropts="--cache-dir"
-
-    if [[ ${prev} =~ ${fileopts} ]]; then
-        COMPREPLY=( $(compgen -f -- ${cur}) )
-        return 0
-    elif [[ ${prev} =~ ${diropts} ]]; then
-        COMPREPLY=( $(compgen -d -- ${cur}) )
-        return 0
-    fi
+    keywords=":ytfavorites :ytrecommended :ytsubscriptions :ytwatchlater"
 
     if [[ ${cur} =~ : ]]; then
         COMPREPLY=( $(compgen -W "${keywords}" -- ${cur}) )

@@ -1,30 +1,26 @@
 #!/usr/bin/env python
-from __future__ import unicode_literals
-
 import os
 from os.path import dirname as dirn
 import sys
 
-sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))
+sys.path.append(dirn(dirn((os.path.abspath(__file__)))))
 import youtube_dl
 
 BASH_COMPLETION_FILE = "youtube-dl.bash-completion"
 BASH_COMPLETION_TEMPLATE = "devscripts/bash-completion.in"
 
 
 def build_completion(opt_parser):
     opts_flag = []
     for group in opt_parser.option_groups:
         for option in group.option_list:
-            # for every long flag
+            #for every long flag
             opts_flag.append(option.get_opt_string())
     with open(BASH_COMPLETION_TEMPLATE) as f:
         template = f.read()
     with open(BASH_COMPLETION_FILE, "w") as f:
-        # just using the special char
+        #just using the special char
         filled_template = template.replace("{{flags}}", " ".join(opts_flag))
         f.write(filled_template)
 
 
 parser = youtube_dl.parseOpts()[0]
 build_completion(parser)

@@ -1,38 +1,17 @@ … @@ -419,15 +387,19 @@
[Flattened side-by-side hunks over the build server devscript; recoverable changes: the `compat_*` shims (`compat_http_server`, `compat_socketserver`, `compat_winreg`, `compat_urlparse`, `compat_str`, `compat_input`) give way to direct `http.server`, `socketserver`, `urlparse`, `unicode` and `input` usage, and the `shutil`, `subprocess` and `tempfile` imports are dropped; the default bind address moves from `0.0.0.0:8142` to `localhost:8142`; `PythonBuilder` looks up Python 2.7 in the registry without the `Wow6432Node` fallback instead of 3.4; the py2exe build step uses `subprocess.check_output` instead of `Popen`; `authorizedUsers` loses `'ytdl-org'`; a separate `except HTTPError` branch is added to `do_GET`; `#====` separator comments reappear.]

@@ -1,12 +1,8 @@ and @@ -14,43 +10,26 @@
[Flattened side-by-side hunks over the `age_limit` porn-check devscript; recoverable changes: the `unicode_literals` import, the docstring's domain-list paragraph and the LIST/EURISTIC `METHOD` switch (with its `compat_urllib_parse_urlparse` domain matching) are dropped, `gettestcases` reverts to `get_testcases`, and the heuristic becomes an inline `'porn' in webpage.lower()` test guarded by a bare `except:` clause.]
@ -1,110 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import io
import json
import mimetypes
import netrc
import optparse
import os
import re
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.compat import (
    compat_basestring,
    compat_getpass,
    compat_print,
    compat_urllib_request,
)
from youtube_dl.utils import (
    make_HTTPS_handler,
    sanitized_Request,
)


class GitHubReleaser(object):
    _API_URL = 'https://api.github.com/repos/ytdl-org/youtube-dl/releases'
    _UPLOADS_URL = 'https://uploads.github.com/repos/ytdl-org/youtube-dl/releases/%s/assets?name=%s'
    _NETRC_MACHINE = 'github.com'

    def __init__(self, debuglevel=0):
        self._init_github_account()
        https_handler = make_HTTPS_handler({}, debuglevel=debuglevel)
        self._opener = compat_urllib_request.build_opener(https_handler)

    def _init_github_account(self):
        try:
            info = netrc.netrc().authenticators(self._NETRC_MACHINE)
            if info is not None:
                self._token = info[2]
                compat_print('Using GitHub credentials found in .netrc...')
                return
            else:
                compat_print('No GitHub credentials found in .netrc')
        except (IOError, netrc.NetrcParseError):
            compat_print('Unable to parse .netrc')
        self._token = compat_getpass(
            'Type your GitHub PAT (personal access token) and press [Return]: ')

    def _call(self, req):
        if isinstance(req, compat_basestring):
            req = sanitized_Request(req)
        req.add_header('Authorization', 'token %s' % self._token)
        response = self._opener.open(req).read().decode('utf-8')
        return json.loads(response)

    def list_releases(self):
        return self._call(self._API_URL)

    def create_release(self, tag_name, name=None, body='', draft=False, prerelease=False):
        data = {
            'tag_name': tag_name,
            'target_commitish': 'master',
            'name': name,
            'body': body,
            'draft': draft,
            'prerelease': prerelease,
        }
        req = sanitized_Request(self._API_URL, json.dumps(data).encode('utf-8'))
        return self._call(req)

    def create_asset(self, release_id, asset):
        asset_name = os.path.basename(asset)
        url = self._UPLOADS_URL % (release_id, asset_name)
        # Our files are small enough to be loaded directly into memory.
        data = open(asset, 'rb').read()
        req = sanitized_Request(url, data)
        mime_type, _ = mimetypes.guess_type(asset_name)
        req.add_header('Content-Type', mime_type or 'application/octet-stream')
        return self._call(req)


def main():
    parser = optparse.OptionParser(usage='%prog CHANGELOG VERSION BUILDPATH')
    options, args = parser.parse_args()
    if len(args) != 3:
        parser.error('Expected a version and a build directory')

    changelog_file, version, build_path = args

    with io.open(changelog_file, encoding='utf-8') as inf:
        changelog = inf.read()

    mobj = re.search(r'(?s)version %s\n{2}(.+?)\n{3}' % version, changelog)
    body = mobj.group(1) if mobj else ''

    releaser = GitHubReleaser()

    new_release = releaser.create_release(
        version, name='youtube-dl %s' % version, body=body)
    release_id = new_release['id']

    for asset in os.listdir(build_path):
        compat_print('Uploading %s...' % asset)
        releaser.create_asset(release_id, os.path.join(build_path, asset))


if __name__ == '__main__':
    main()
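The `re.search` in `main()` pulls the release body out of the ChangeLog by matching a version heading, the blank line after it, and stopping at the triple newline that separates entries. A minimal sketch with a made-up ChangeLog snippet (the real script reads the file passed on the command line):

```python
import re

# Hypothetical ChangeLog excerpt; entries are separated by two blank lines
changelog = 'version 2020.01.01\n\nCore\n* fix something\n\n\nversion 2019.12.31\n\nOlder entry\n\n\n'
version = '2020.01.01'

# Same pattern as gh_release.py: (?s) lets '.' span newlines, so the lazy
# group captures everything up to the next triple newline
mobj = re.search(r'(?s)version %s\n{2}(.+?)\n{3}' % version, changelog)
body = mobj.group(1) if mobj else ''
print(body)
```

Only the first entry's body is captured; the lazy quantifier stops at the first `\n\n\n`.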
@ -1,5 +0,0 @@

{{commands}}


complete --command youtube-dl --arguments ":ytfavorites :ytrecommended :ytsubscriptions :ytwatchlater :ythistory"
@ -1,49 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import optparse
import os
from os.path import dirname as dirn
import sys

sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))
import youtube_dl
from youtube_dl.utils import shell_quote

FISH_COMPLETION_FILE = 'youtube-dl.fish'
FISH_COMPLETION_TEMPLATE = 'devscripts/fish-completion.in'

EXTRA_ARGS = {
    'recode-video': ['--arguments', 'mp4 flv ogg webm mkv', '--exclusive'],

    # Options that need a file parameter
    'download-archive': ['--require-parameter'],
    'cookies': ['--require-parameter'],
    'load-info': ['--require-parameter'],
    'batch-file': ['--require-parameter'],
}


def build_completion(opt_parser):
    commands = []

    for group in opt_parser.option_groups:
        for option in group.option_list:
            long_option = option.get_opt_string().strip('-')
            complete_cmd = ['complete', '--command', 'youtube-dl', '--long-option', long_option]
            if option._short_opts:
                complete_cmd += ['--short-option', option._short_opts[0].strip('-')]
            if option.help != optparse.SUPPRESS_HELP:
                complete_cmd += ['--description', option.help]
            complete_cmd.extend(EXTRA_ARGS.get(long_option, []))
            commands.append(shell_quote(complete_cmd))

    with open(FISH_COMPLETION_TEMPLATE) as f:
        template = f.read()
    filled_template = template.replace('{{commands}}', '\n'.join(commands))
    with open(FISH_COMPLETION_FILE, 'w') as f:
        f.write(filled_template)


parser = youtube_dl.parseOpts()[0]
build_completion(parser)
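`build_completion` shell-quotes each generated `complete` invocation so descriptions with spaces survive as single fish arguments. A rough stand-in for `youtube_dl.utils.shell_quote` using only the standard library (assuming it behaves like joining `shlex.quote`-escaped arguments, which is close enough for illustration):

```python
import shlex

def shell_quote(args):
    # Stand-in for youtube_dl.utils.shell_quote: escape each argument,
    # then join into one shell-safe command line
    return ' '.join(shlex.quote(a) for a in args)

complete_cmd = ['complete', '--command', 'youtube-dl', '--long-option', 'recode-video',
                '--description', 'Encode the video to another format']
line = shell_quote(complete_cmd)
print(line)
```

Splitting the quoted line back with `shlex.split` round-trips to the original argument list, which is the property the completion file relies on.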
@ -1,43 +0,0 @@
from __future__ import unicode_literals

import codecs
import subprocess

import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.utils import intlist_to_bytes
from youtube_dl.aes import aes_encrypt, key_expansion

secret_msg = b'Secret message goes here'


def hex_str(int_list):
    return codecs.encode(intlist_to_bytes(int_list), 'hex')


def openssl_encode(algo, key, iv):
    cmd = ['openssl', 'enc', '-e', '-' + algo, '-K', hex_str(key), '-iv', hex_str(iv)]
    prog = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = prog.communicate(secret_msg)
    return out


iv = key = [0x20, 0x15] + 14 * [0]

r = openssl_encode('aes-128-cbc', key, iv)
print('aes_cbc_decrypt')
print(repr(r))

password = key
new_key = aes_encrypt(password, key_expansion(password))
r = openssl_encode('aes-128-ctr', new_key, iv)
print('aes_decrypt_text 16')
print(repr(r))

password = key + 16 * [0]
new_key = aes_encrypt(password, key_expansion(password)) * (32 // 16)
r = openssl_encode('aes-256-ctr', new_key, iv)
print('aes_decrypt_text 32')
print(repr(r))
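`hex_str` just hex-encodes the byte values so they can be passed to `openssl enc -K`/`-iv`. With `intlist_to_bytes` replaced by the built-in `bytes` constructor (a stand-in; the script imports it from `youtube_dl.utils`), the helper can be checked in isolation:

```python
import codecs

def hex_str(int_list):
    # bytes() stands in for youtube_dl.utils.intlist_to_bytes
    return codecs.encode(bytes(int_list), 'hex')

iv = key = [0x20, 0x15] + 14 * [0]
print(hex_str(key))  # 16 bytes -> 32 lowercase hex digits
```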
@ -1,5 +1,4 @@
 #!/usr/bin/env python3
-from __future__ import unicode_literals

 import json
 import sys
@ -1,22 +1,32 @@
 #!/usr/bin/env python3
-from __future__ import unicode_literals
+import hashlib
+import shutil
+import subprocess
+import tempfile
+import urllib.request
 import json

 versions_info = json.load(open('update/versions.json'))
 version = versions_info['latest']
-version_dict = versions_info['versions'][version]
+URL = versions_info['versions'][version]['bin'][0]

+data = urllib.request.urlopen(URL).read()

 # Read template page
 with open('download.html.in', 'r', encoding='utf-8') as tmplf:
     template = tmplf.read()

+md5sum = hashlib.md5(data).hexdigest()
+sha1sum = hashlib.sha1(data).hexdigest()
+sha256sum = hashlib.sha256(data).hexdigest()
 template = template.replace('@PROGRAM_VERSION@', version)
-template = template.replace('@PROGRAM_URL@', version_dict['bin'][0])
-template = template.replace('@PROGRAM_SHA256SUM@', version_dict['bin'][1])
-template = template.replace('@EXE_URL@', version_dict['exe'][0])
-template = template.replace('@EXE_SHA256SUM@', version_dict['exe'][1])
-template = template.replace('@TAR_URL@', version_dict['tar'][0])
-template = template.replace('@TAR_SHA256SUM@', version_dict['tar'][1])
+template = template.replace('@PROGRAM_URL@', URL)
+template = template.replace('@PROGRAM_MD5SUM@', md5sum)
+template = template.replace('@PROGRAM_SHA1SUM@', sha1sum)
+template = template.replace('@PROGRAM_SHA256SUM@', sha256sum)
+template = template.replace('@EXE_URL@', versions_info['versions'][version]['exe'][0])
+template = template.replace('@EXE_SHA256SUM@', versions_info['versions'][version]['exe'][1])
+template = template.replace('@TAR_URL@', versions_info['versions'][version]['tar'][0])
+template = template.replace('@TAR_SHA256SUM@', versions_info['versions'][version]['tar'][1])
 with open('download.html', 'w', encoding='utf-8') as dlf:
     dlf.write(template)
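The added side of this hunk hashes the downloaded binary three ways before substituting the template placeholders. The digest calls behave as follows (using an empty payload in place of the downloaded binary, since the digests of `b''` are well known):

```python
import hashlib

data = b''  # stands in for the downloaded youtube-dl binary
md5sum = hashlib.md5(data).hexdigest()
sha1sum = hashlib.sha1(data).hexdigest()
sha256sum = hashlib.sha256(data).hexdigest()
# hexdigest() yields 32, 40 and 64 hex characters respectively
print(md5sum, sha1sum, sha256sum)
```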
@ -1,5 +1,4 @@
 #!/usr/bin/env python3
-from __future__ import unicode_literals, with_statement

 import rsa
 import json
@ -12,23 +11,22 @@ except NameError:

 versions_info = json.load(open('update/versions.json'))
 if 'signature' in versions_info:
     del versions_info['signature']

 print('Enter the PKCS1 private key, followed by a blank line:')
 privkey = b''
 while True:
     try:
         line = input()
     except EOFError:
         break
     if line == '':
         break
     privkey += line.encode('ascii') + b'\n'
 privkey = rsa.PrivateKey.load_pkcs1(privkey)

 signature = hexlify(rsa.pkcs1.sign(json.dumps(versions_info, sort_keys=True).encode('utf-8'), privkey, 'SHA-256')).decode()
 print('signature: ' + signature)

 versions_info['signature'] = signature
-with open('update/versions.json', 'w') as versionsf:
-    json.dump(versions_info, versionsf, indent=4, sort_keys=True)
+json.dump(versions_info, open('update/versions.json', 'w'), indent=4, sort_keys=True)
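Both sides sign `json.dumps(versions_info, sort_keys=True)`. The `sort_keys` flag matters because the signature must be computed over a canonical serialization; without it, two dicts with the same items could serialize differently and the published signature would fail to verify:

```python
import json

# Same items, different insertion order
a = json.dumps({'b': 2, 'a': 1}, sort_keys=True)
b = json.dumps({'a': 1, 'b': 2}, sort_keys=True)
print(a)  # both serialize to the same canonical form
```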
@ -1,11 +1,11 @@
 #!/usr/bin/env python
 # coding: utf-8

-from __future__ import with_statement, unicode_literals
+from __future__ import with_statement

 import datetime
 import glob
-import io  # For Python 2 compatibility
+import io  # For Python 2 compatibilty
 import os
 import re
@ -13,7 +13,7 @@ year = str(datetime.datetime.now().year)
 for fn in glob.glob('*.html*'):
     with io.open(fn, encoding='utf-8') as f:
         content = f.read()
-    newc = re.sub(r'(?P<copyright>Copyright © 2011-)(?P<year>[0-9]{4})', 'Copyright © 2011-' + year, content)
+    newc = re.sub(u'(?P<copyright>Copyright © 2006-)(?P<year>[0-9]{4})', u'Copyright © 2006-' + year, content)
     if content != newc:
         tmpFn = fn + '.part'
         with io.open(tmpFn, 'wt', encoding='utf-8') as outf:
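The substitution bumps only the end year of the copyright range; the named groups match the fixed prefix and the four-digit year separately. A quick check of the removed side's pattern (the HTML snippet is made up):

```python
import re

year = '2025'  # stands in for str(datetime.datetime.now().year)
content = '<footer>Copyright © 2011-2016 youtube-dl</footer>'
# Only the trailing [0-9]{4} is effectively replaced
newc = re.sub(r'(?P<copyright>Copyright © 2011-)(?P<year>[0-9]{4})',
              'Copyright © 2011-' + year, content)
print(newc)
```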
@ -1,76 +1,56 @@
 #!/usr/bin/env python3
-from __future__ import unicode_literals

 import datetime
-import io
-import json
 import textwrap

+import json

-atom_template = textwrap.dedent("""\
-    <?xml version="1.0" encoding="utf-8"?>
-    <feed xmlns="http://www.w3.org/2005/Atom">
-        <link rel="self" href="http://ytdl-org.github.io/youtube-dl/update/releases.atom" />
-        <title>youtube-dl releases</title>
-        <id>https://yt-dl.org/feed/youtube-dl-updates-feed</id>
-        <updated>@TIMESTAMP@</updated>
-        @ENTRIES@
-    </feed>""")
+atom_template=textwrap.dedent("""\
+    <?xml version='1.0' encoding='utf-8'?>
+    <atom:feed xmlns:atom="http://www.w3.org/2005/Atom">
+        <atom:title>youtube-dl releases</atom:title>
+        <atom:id>youtube-dl-updates-feed</atom:id>
+        <atom:updated>@TIMESTAMP@</atom:updated>
+        @ENTRIES@
+    </atom:feed>""")

-entry_template = textwrap.dedent("""
-    <entry>
-        <id>https://yt-dl.org/feed/youtube-dl-updates-feed/youtube-dl-@VERSION@</id>
-        <title>New version @VERSION@</title>
-        <link href="http://ytdl-org.github.io/youtube-dl" />
-        <content type="xhtml">
+entry_template=textwrap.dedent("""
+    <atom:entry>
+        <atom:id>youtube-dl-@VERSION@</atom:id>
+        <atom:title>New version @VERSION@</atom:title>
+        <atom:link href="http://rg3.github.io/youtube-dl" />
+        <atom:content type="xhtml">
             <div xmlns="http://www.w3.org/1999/xhtml">
                 Downloads available at <a href="https://yt-dl.org/downloads/@VERSION@/">https://yt-dl.org/downloads/@VERSION@/</a>
             </div>
-        </content>
-        <author>
-            <name>The youtube-dl maintainers</name>
-        </author>
-        <updated>@TIMESTAMP@</updated>
-    </entry>
+        </atom:content>
+        <atom:author>
+            <atom:name>The youtube-dl maintainers</atom:name>
+        </atom:author>
+        <atom:updated>@TIMESTAMP@</atom:updated>
+    </atom:entry>
 """)

 now = datetime.datetime.now()
-now_iso = now.isoformat() + 'Z'
+now_iso = now.isoformat()

-atom_template = atom_template.replace('@TIMESTAMP@', now_iso)
+atom_template = atom_template.replace('@TIMESTAMP@',now_iso)

+entries=[]

 versions_info = json.load(open('update/versions.json'))
 versions = list(versions_info['versions'].keys())
 versions.sort()

-entries = []
 for v in versions:
-    fields = v.split('.')
-    year, month, day = map(int, fields[:3])
-    faked = 0
-    patchlevel = 0
-    while True:
-        try:
-            datetime.date(year, month, day)
-        except ValueError:
-            day -= 1
-            faked += 1
-            assert day > 0
-            continue
-        break
-    if len(fields) >= 4:
-        try:
-            patchlevel = int(fields[3])
-        except ValueError:
-            patchlevel = 1
-    timestamp = '%04d-%02d-%02dT00:%02d:%02dZ' % (year, month, day, faked, patchlevel)
-
-    entry = entry_template.replace('@TIMESTAMP@', timestamp)
-    entry = entry.replace('@VERSION@', v)
+    entry = entry_template.replace('@TIMESTAMP@',v.replace('.','-'))
+    entry = entry.replace('@VERSION@',v)
     entries.append(entry)

 entries_str = textwrap.indent(''.join(entries), '\t')
 atom_template = atom_template.replace('@ENTRIES@', entries_str)

-with io.open('update/releases.atom', 'w', encoding='utf-8') as atom_file:
+with open('update/releases.atom','w',encoding='utf-8') as atom_file:
     atom_file.write(atom_template)
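The removed loop turns a `YYYY.MM.DD(.patch)` version string into a valid Atom timestamp: impossible calendar days are decremented until `datetime.date` accepts them, and the adjustment count is smuggled into the minutes field so feed entries stay unique and ordered. Traced on a hypothetical version with an impossible day:

```python
import datetime

v = '2013.02.30'  # hypothetical version; February has no 30th
fields = v.split('.')
year, month, day = map(int, fields[:3])
faked = 0
patchlevel = 0
while True:
    try:
        datetime.date(year, month, day)
    except ValueError:
        # walk the day back until it is a real date, counting the steps
        day -= 1
        faked += 1
        assert day > 0
        continue
    break
if len(fields) >= 4:
    patchlevel = int(fields[3])
timestamp = '%04d-%02d-%02dT00:%02d:%02dZ' % (year, month, day, faked, patchlevel)
print(timestamp)
```

Two steps back (30 → 28) land on a valid date, and the `02` in the minutes field records that.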
@ -1,29 +1,27 @@
 #!/usr/bin/env python3
-from __future__ import unicode_literals

 import sys
 import os
 import textwrap

 # We must be able to import youtube_dl
-sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
+sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))

 import youtube_dl

-
 def main():
     with open('supportedsites.html.in', 'r', encoding='utf-8') as tmplf:
         template = tmplf.read()

     ie_htmls = []
-    for ie in youtube_dl.list_extractors(age_limit=None):
+    for ie in sorted(youtube_dl.gen_extractors(), key=lambda i: i.IE_NAME.lower()):
         ie_html = '<b>{}</b>'.format(ie.IE_NAME)
         ie_desc = getattr(ie, 'IE_DESC', None)
         if ie_desc is False:
             continue
         elif ie_desc is not None:
             ie_html += ': {}'.format(ie.IE_DESC)
-        if not ie.working():
+        if ie.working() == False:
             ie_html += ' (Currently broken)'
         ie_htmls.append('<li>{}</li>'.format(ie_html))

@ -32,6 +30,5 @@ def main():
     with open('supportedsites.html', 'w', encoding='utf-8') as sitesf:
         sitesf.write(template)


 if __name__ == '__main__':
     main()
@ -1,5 +0,0 @@
#!/bin/bash

wget http://central.maven.org/maven2/org/python/jython-installer/2.7.1/jython-installer-2.7.1.jar
java -jar jython-installer-2.7.1.jar -s -d "$HOME/jython"
$HOME/jython/bin/jython -m pip install nose
@ -1,19 +0,0 @@
# coding: utf-8
from __future__ import unicode_literals

import re


class LazyLoadExtractor(object):
    _module = None

    @classmethod
    def ie_key(cls):
        return cls.__name__[:-2]

    def __new__(cls, *args, **kwargs):
        mod = __import__(cls._module, fromlist=(cls.__name__,))
        real_cls = getattr(mod, cls.__name__)
        instance = real_cls.__new__(real_cls)
        instance.__init__(*args, **kwargs)
        return instance
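The `__new__` override is what makes the generated stubs lazy: the real module is imported only when a stub is first instantiated, and the constructed object is an instance of the real class, not the stub. The same trick, demonstrated with a stand-in class whose name and `_module` happen to point at `json.JSONDecoder` (chosen purely for illustration):

```python
class JSONDecoder(object):  # the name must match the real class inside _module
    _module = 'json'

    def __new__(cls, *args, **kwargs):
        # same body as LazyLoadExtractor.__new__: import lazily,
        # then build and initialize the real class
        mod = __import__(cls._module, fromlist=(cls.__name__,))
        real_cls = getattr(mod, cls.__name__)
        instance = real_cls.__new__(real_cls)
        instance.__init__(*args, **kwargs)
        return instance


d = JSONDecoder()  # actually constructs json.JSONDecoder
```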
@ -1,33 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import io
import optparse
import re


def main():
    parser = optparse.OptionParser(usage='%prog INFILE OUTFILE')
    options, args = parser.parse_args()
    if len(args) != 2:
        parser.error('Expected an input and an output filename')

    infile, outfile = args

    with io.open(infile, encoding='utf-8') as inf:
        readme = inf.read()

    bug_text = re.search(
        r'(?s)#\s*BUGS\s*[^\n]*\s*(.*?)#\s*COPYRIGHT', readme).group(1)
    dev_text = re.search(
        r'(?s)(#\s*DEVELOPER INSTRUCTIONS.*?)#\s*EMBEDDING YOUTUBE-DL',
        readme).group(1)

    out = bug_text + dev_text

    with io.open(outfile, 'w', encoding='utf-8') as outf:
        outf.write(out)


if __name__ == '__main__':
    main()
@ -1,29 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import io
import optparse


def main():
    parser = optparse.OptionParser(usage='%prog INFILE OUTFILE')
    options, args = parser.parse_args()
    if len(args) != 2:
        parser.error('Expected an input and an output filename')

    infile, outfile = args

    with io.open(infile, encoding='utf-8') as inf:
        issue_template_tmpl = inf.read()

    # Get the version from youtube_dl/version.py without importing the package
    exec(compile(open('youtube_dl/version.py').read(),
                 'youtube_dl/version.py', 'exec'))

    out = issue_template_tmpl % {'version': locals()['__version__']}

    with io.open(outfile, 'w', encoding='utf-8') as outf:
        outf.write(out)

if __name__ == '__main__':
    main()
@ -1,100 +0,0 @@
from __future__ import unicode_literals, print_function

from inspect import getsource
import io
import os
from os.path import dirname as dirn
import sys

print('WARNING: Lazy loading extractors is an experimental feature that may not always work', file=sys.stderr)

sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))

lazy_extractors_filename = sys.argv[1]
if os.path.exists(lazy_extractors_filename):
    os.remove(lazy_extractors_filename)

from youtube_dl.extractor import _ALL_CLASSES
from youtube_dl.extractor.common import InfoExtractor, SearchInfoExtractor

with open('devscripts/lazy_load_template.py', 'rt') as f:
    module_template = f.read()

module_contents = [
    module_template + '\n' + getsource(InfoExtractor.suitable) + '\n',
    'class LazyLoadSearchExtractor(LazyLoadExtractor):\n    pass\n']

ie_template = '''
class {name}({bases}):
    _VALID_URL = {valid_url!r}
    _module = '{module}'
'''

make_valid_template = '''
    @classmethod
    def _make_valid_url(cls):
        return {valid_url!r}
'''


def get_base_name(base):
    if base is InfoExtractor:
        return 'LazyLoadExtractor'
    elif base is SearchInfoExtractor:
        return 'LazyLoadSearchExtractor'
    else:
        return base.__name__


def build_lazy_ie(ie, name):
    valid_url = getattr(ie, '_VALID_URL', None)
    s = ie_template.format(
        name=name,
        bases=', '.join(map(get_base_name, ie.__bases__)),
        valid_url=valid_url,
        module=ie.__module__)
    if ie.suitable.__func__ is not InfoExtractor.suitable.__func__:
        s += '\n' + getsource(ie.suitable)
    if hasattr(ie, '_make_valid_url'):
        # search extractors
        s += make_valid_template.format(valid_url=ie._make_valid_url())
    return s


# find the correct sorting and add the required base classes so that sublcasses
# can be correctly created
classes = _ALL_CLASSES[:-1]
ordered_cls = []
while classes:
    for c in classes[:]:
        bases = set(c.__bases__) - set((object, InfoExtractor, SearchInfoExtractor))
        stop = False
        for b in bases:
            if b not in classes and b not in ordered_cls:
                if b.__name__ == 'GenericIE':
                    exit()
                classes.insert(0, b)
                stop = True
        if stop:
            break
        if all(b in ordered_cls for b in bases):
            ordered_cls.append(c)
            classes.remove(c)
            break
ordered_cls.append(_ALL_CLASSES[-1])

names = []
for ie in ordered_cls:
    name = ie.__name__
    src = build_lazy_ie(ie, name)
    module_contents.append(src)
    if ie in _ALL_CLASSES:
        names.append(name)

module_contents.append(
    '_ALL_CLASSES = [{0}]'.format(', '.join(names)))

module_src = '\n'.join(module_contents) + '\n'

with io.open(lazy_extractors_filename, 'wt', encoding='utf-8') as f:
    f.write(module_src)
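The while-loop near the bottom is a hand-rolled dependency sort: a class is emitted only once all of its non-trivial base classes have been emitted, so that each generated stub can name its parents. A reduced sketch of the same idea with two dummy classes:

```python
class Base(object):
    pass


class Child(Base):
    pass


# emit a class only after all of its bases (ignoring object) are emitted,
# mirroring the ordering loop in make_lazy_extractors.py
classes = [Child, Base]
ordered_cls = []
while classes:
    for c in classes[:]:
        if all(b in ordered_cls for b in c.__bases__ if b is not object):
            ordered_cls.append(c)
            classes.remove(c)
            break
print([c.__name__ for c in ordered_cls])
```

Even though `Child` comes first in the input, `Base` is emitted first because `Child` is skipped until its base is available.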
@ -1,26 +1,20 @@
-from __future__ import unicode_literals
-
-import io
 import sys
 import re

 README_FILE = 'README.md'
 helptext = sys.stdin.read()

-if isinstance(helptext, bytes):
-    helptext = helptext.decode('utf-8')
-
-with io.open(README_FILE, encoding='utf-8') as f:
+with open(README_FILE) as f:
     oldreadme = f.read()

 header = oldreadme[:oldreadme.index('# OPTIONS')]
 footer = oldreadme[oldreadme.index('# CONFIGURATION'):]

-options = helptext[helptext.index('  General Options:') + 19:]
-options = re.sub(r'(?m)^  (\w.+)$', r'## \1', options)
+options = helptext[helptext.index('  General Options:')+19:]
+options = re.sub(r'^  (\w.+)$', r'## \1', options, flags=re.M)
 options = '# OPTIONS\n' + options + '\n'

-with io.open(README_FILE, 'w', encoding='utf-8') as f:
+with open(README_FILE, 'w') as f:
     f.write(header)
     f.write(options)
     f.write(footer)
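Both variants of the `re.sub` (inline `(?m)` versus `flags=re.M`) do the same thing: promote the two-space-indented group headers from `--help` output to `##` headings, while leaving the more deeply indented option lines untouched because the character after the two spaces must be a word character:

```python
import re

# Made-up two-line excerpt of youtube-dl --help output
helptext = '  Network Options:\n    --proxy URL  Use the specified proxy'
options = re.sub(r'(?m)^  (\w.+)$', r'## \1', helptext)
print(options)
```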
@ -1,46 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import io
import optparse
import os
import sys


# Import youtube_dl
ROOT_DIR = os.path.join(os.path.dirname(__file__), '..')
sys.path.insert(0, ROOT_DIR)
import youtube_dl


def main():
    parser = optparse.OptionParser(usage='%prog OUTFILE.md')
    options, args = parser.parse_args()
    if len(args) != 1:
        parser.error('Expected an output filename')

    outfile, = args

    def gen_ies_md(ies):
        for ie in ies:
            ie_md = '**{0}**'.format(ie.IE_NAME)
            ie_desc = getattr(ie, 'IE_DESC', None)
            if ie_desc is False:
                continue
            if ie_desc is not None:
                ie_md += ': {0}'.format(ie.IE_DESC)
            if not ie.working():
                ie_md += ' (Currently broken)'
            yield ie_md

    ies = sorted(youtube_dl.gen_extractors(), key=lambda i: i.IE_NAME.lower())
    out = '# Supported sites\n' + ''.join(
        ' - ' + md + '\n'
        for md in gen_ies_md(ies))

    with io.open(outfile, 'w', encoding='utf-8') as outf:
        outf.write(out)


if __name__ == '__main__':
    main()
@ -1,79 +0,0 @@
from __future__ import unicode_literals

import io
import optparse
import os.path
import re

ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
README_FILE = os.path.join(ROOT_DIR, 'README.md')

PREFIX = r'''%YOUTUBE-DL(1)

# NAME

youtube\-dl \- download videos from youtube.com or other video platforms

# SYNOPSIS

**youtube-dl** \[OPTIONS\] URL [URL...]

'''


def main():
    parser = optparse.OptionParser(usage='%prog OUTFILE.md')
    options, args = parser.parse_args()
    if len(args) != 1:
        parser.error('Expected an output filename')

    outfile, = args

    with io.open(README_FILE, encoding='utf-8') as f:
        readme = f.read()

    readme = re.sub(r'(?s)^.*?(?=# DESCRIPTION)', '', readme)
    readme = re.sub(r'\s+youtube-dl \[OPTIONS\] URL \[URL\.\.\.\]', '', readme)
    readme = PREFIX + readme

    readme = filter_options(readme)

    with io.open(outfile, 'w', encoding='utf-8') as outf:
        outf.write(readme)


def filter_options(readme):
    ret = ''
    in_options = False
    for line in readme.split('\n'):
        if line.startswith('# '):
            if line[2:].startswith('OPTIONS'):
                in_options = True
            else:
                in_options = False

        if in_options:
            if line.lstrip().startswith('-'):
                split = re.split(r'\s{2,}', line.lstrip())
                # Description string may start with `-` as well. If there is
                # only one piece then it's a description bit not an option.
                if len(split) > 1:
                    option, description = split
                    split_option = option.split(' ')

                    if not split_option[-1].startswith('-'):  # metavar
                        option = ' '.join(split_option[:-1] + ['*%s*' % split_option[-1]])

                    # Pandoc's definition_lists. See http://pandoc.org/README.html
                    # for more information.
                    ret += '\n%s\n:   %s\n' % (option, description)
                    continue
            ret += line.lstrip() + '\n'
        else:
            ret += line + '\n'

    return ret


if __name__ == '__main__':
    main()
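Inside `filter_options`, an option line is split from its description on runs of two or more spaces, a trailing metavar (the last token of the option part that does not start with `-`) is italicized, and the pair is re-emitted as a Pandoc definition-list item. One hypothetical `--help` line traced through those steps:

```python
import re

line = '    --proxy URL                      Use the specified HTTP/HTTPS proxy'
split = re.split(r'\s{2,}', line.lstrip())
option, description = split            # ['--proxy URL', 'Use the specified ...']
split_option = option.split(' ')
if not split_option[-1].startswith('-'):  # metavar
    option = ' '.join(split_option[:-1] + ['*%s*' % split_option[-1]])
# Pandoc definition-list form: term, then ':   ' + definition
ret = '\n%s\n:   %s\n' % (option, description)
print(ret)
```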
@ -6,7 +6,7 @@
 # * the git config user.signingkey is properly set

 # You will need
-# pip install coverage nose rsa wheel
+# pip install coverage nose rsa

 # TODO
 # release notes
@ -14,57 +14,20 @@

 set -e

-skip_tests=true
-gpg_sign_commits=""
-buildserver='localhost:8142'
-
-while true
-do
-    case "$1" in
-        --run-tests)
-            skip_tests=false
-            shift
-            ;;
-        --gpg-sign-commits|-S)
-            gpg_sign_commits="-S"
-            shift
-            ;;
-        --buildserver)
-            buildserver="$2"
-            shift 2
-            ;;
-        --*)
-            echo "ERROR: unknown option $1"
-            exit 1
-            ;;
-        *)
-            break
-            ;;
-    esac
-done
+skip_tests=false
+if [ "$1" = '--skip-test' ]; then
+    skip_tests=true
+    shift
+fi

 if [ -z "$1" ]; then echo "ERROR: specify version number like this: $0 1994.09.06"; exit 1; fi
 version="$1"
-major_version=$(echo "$version" | sed -n 's#^\([0-9]*\.[0-9]*\.[0-9]*\).*#\1#p')
-if test "$major_version" '!=' "$(date '+%Y.%m.%d')"; then
-    echo "$version does not start with today's date!"
-    exit 1
-fi

 if [ ! -z "`git tag | grep "$version"`" ]; then echo 'ERROR: version already present'; exit 1; fi
 if [ ! -z "`git status --porcelain | grep -v CHANGELOG`" ]; then echo 'ERROR: the working directory is not clean; commit or stash changes'; exit 1; fi
-useless_files=$(find youtube_dl -type f -not -name '*.py')
-if [ ! -z "$useless_files" ]; then echo "ERROR: Non-.py files in youtube_dl: $useless_files"; exit 1; fi
 if [ ! -f "updates_key.pem" ]; then echo 'ERROR: updates_key.pem missing'; exit 1; fi
-if ! type pandoc >/dev/null 2>/dev/null; then echo 'ERROR: pandoc is missing'; exit 1; fi
-if ! python3 -c 'import rsa' 2>/dev/null; then echo 'ERROR: python3-rsa is missing'; exit 1; fi
-if ! python3 -c 'import wheel' 2>/dev/null; then echo 'ERROR: wheel is missing'; exit 1; fi
-
-read -p "Is ChangeLog up to date? (y/n) " -n 1
-if [[ ! $REPLY =~ ^[Yy]$ ]]; then exit 1; fi

 /bin/echo -e "\n### First of all, testing..."
-make clean
+make cleanall
 if $skip_tests ; then
     echo 'SKIPPING TESTS'
 else
else
|
||||||
@ -74,13 +37,10 @@ fi
|
|||||||
/bin/echo -e "\n### Changing version in version.py..."
|
/bin/echo -e "\n### Changing version in version.py..."
|
||||||
sed -i "s/__version__ = '.*'/__version__ = '$version'/" youtube_dl/version.py
|
sed -i "s/__version__ = '.*'/__version__ = '$version'/" youtube_dl/version.py
|
||||||
|
|
||||||
/bin/echo -e "\n### Changing version in ChangeLog..."
|
/bin/echo -e "\n### Committing CHANGELOG README.md and youtube_dl/version.py..."
|
||||||
sed -i "s/<unreleased>/$version/" ChangeLog
|
make README.md
|
||||||
|
git add CHANGELOG README.md youtube_dl/version.py
|
||||||
/bin/echo -e "\n### Committing documentation, templates and youtube_dl/version.py..."
|
git commit -m "release $version"
|
||||||
make README.md CONTRIBUTING.md issuetemplates supportedsites
|
|
||||||
git add README.md CONTRIBUTING.md .github/ISSUE_TEMPLATE/1_broken_site.md .github/ISSUE_TEMPLATE/2_site_support_request.md .github/ISSUE_TEMPLATE/3_site_feature_request.md .github/ISSUE_TEMPLATE/4_bug_report.md .github/ISSUE_TEMPLATE/5_feature_request.md .github/ISSUE_TEMPLATE/6_question.md docs/supportedsites.md youtube_dl/version.py ChangeLog
|
|
||||||
git commit $gpg_sign_commits -m "release $version"
|
|
||||||
|
|
||||||
/bin/echo -e "\n### Now tagging, signing and pushing..."
|
/bin/echo -e "\n### Now tagging, signing and pushing..."
|
||||||
git tag -s -m "Release $version" "$version"
|
git tag -s -m "Release $version" "$version"
|
||||||
@ -96,7 +56,7 @@ git push origin "$version"
|
|||||||
REV=$(git rev-parse HEAD)
|
REV=$(git rev-parse HEAD)
|
||||||
make youtube-dl youtube-dl.tar.gz
|
make youtube-dl youtube-dl.tar.gz
|
||||||
read -p "VM running? (y/n) " -n 1
|
read -p "VM running? (y/n) " -n 1
|
||||||
wget "http://$buildserver/build/ytdl-org/youtube-dl/youtube-dl.exe?rev=$REV" -O youtube-dl.exe
|
wget "http://localhost:8142/build/rg3/youtube-dl/youtube-dl.exe?rev=$REV" -O youtube-dl.exe
|
||||||
mkdir -p "build/$version"
|
mkdir -p "build/$version"
|
||||||
mv youtube-dl youtube-dl.exe "build/$version"
|
mv youtube-dl youtube-dl.exe "build/$version"
|
||||||
mv youtube-dl.tar.gz "build/$version/youtube-dl-$version.tar.gz"
|
mv youtube-dl.tar.gz "build/$version/youtube-dl-$version.tar.gz"
|
||||||
@ -105,17 +65,17 @@ RELEASE_FILES="youtube-dl youtube-dl.exe youtube-dl-$version.tar.gz"
|
|||||||
(cd build/$version/ && sha1sum $RELEASE_FILES > SHA1SUMS)
|
(cd build/$version/ && sha1sum $RELEASE_FILES > SHA1SUMS)
|
||||||
(cd build/$version/ && sha256sum $RELEASE_FILES > SHA2-256SUMS)
|
(cd build/$version/ && sha256sum $RELEASE_FILES > SHA2-256SUMS)
|
||||||
(cd build/$version/ && sha512sum $RELEASE_FILES > SHA2-512SUMS)
|
(cd build/$version/ && sha512sum $RELEASE_FILES > SHA2-512SUMS)
|
||||||
|
git checkout HEAD -- youtube-dl youtube-dl.exe
|
||||||
|
|
||||||
/bin/echo -e "\n### Signing and uploading the new binaries to GitHub..."
|
/bin/echo -e "\n### Signing and uploading the new binaries to yt-dl.org ..."
|
||||||
for f in $RELEASE_FILES; do gpg --passphrase-repeat 5 --detach-sig "build/$version/$f"; done
|
for f in $RELEASE_FILES; do gpg --detach-sig "build/$version/$f"; done
|
||||||
|
scp -r "build/$version" ytdl@yt-dl.org:html/tmp/
|
||||||
ROOT=$(pwd)
|
ssh ytdl@yt-dl.org "mv html/tmp/$version html/downloads/"
|
||||||
python devscripts/create-github-release.py ChangeLog $version "$ROOT/build/$version"
|
|
||||||
|
|
||||||
ssh ytdl@yt-dl.org "sh html/update_latest.sh $version"
|
ssh ytdl@yt-dl.org "sh html/update_latest.sh $version"
|
||||||
|
|
||||||
/bin/echo -e "\n### Now switching to gh-pages..."
|
/bin/echo -e "\n### Now switching to gh-pages..."
|
||||||
git clone --branch gh-pages --single-branch . build/gh-pages
|
git clone --branch gh-pages --single-branch . build/gh-pages
|
||||||
|
ROOT=$(pwd)
|
||||||
(
|
(
|
||||||
set -e
|
set -e
|
||||||
ORIGIN_URL=$(git config --get remote.origin.url)
|
ORIGIN_URL=$(git config --get remote.origin.url)
|
||||||
@ -127,7 +87,7 @@ git clone --branch gh-pages --single-branch . build/gh-pages
|
|||||||
"$ROOT/devscripts/gh-pages/update-copyright.py"
|
"$ROOT/devscripts/gh-pages/update-copyright.py"
|
||||||
"$ROOT/devscripts/gh-pages/update-sites.py"
|
"$ROOT/devscripts/gh-pages/update-sites.py"
|
||||||
git add *.html *.html.in update
|
git add *.html *.html.in update
|
||||||
git commit $gpg_sign_commits -m "release $version"
|
git commit -m "release $version"
|
||||||
git push "$ROOT" gh-pages
|
git push "$ROOT" gh-pages
|
||||||
git push "$ORIGIN_URL" gh-pages
|
git push "$ORIGIN_URL" gh-pages
|
||||||
)
|
)
|
||||||
@ -135,7 +95,7 @@ rm -rf build
|
|||||||
|
|
||||||
make pypi-files
|
make pypi-files
|
||||||
echo "Uploading to PyPi ..."
|
echo "Uploading to PyPi ..."
|
||||||
python setup.py sdist bdist_wheel upload
|
python setup.py sdist upload
|
||||||
make clean
|
make clean
|
||||||
|
|
||||||
/bin/echo -e "\n### DONE!"
|
/bin/echo -e "\n### DONE!"
|
||||||
|
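The newer side of release.sh enforces that the release version starts with today's date, using `sed` and `date(1)`. The same check can be sketched in Python; `version_matches_today` is a hypothetical helper for illustration, not part of the repository:

```python
import datetime
import re


def version_matches_today(version, today=None):
    # Mirror of the shell check: the version must begin with today's date
    # in YYYY.MM.DD form; an optional ".N" hotfix suffix is allowed.
    today = today or datetime.date.today()
    m = re.match(r'^(\d{4})\.(\d{2})\.(\d{2})(?:\.\d+)?$', version)
    if not m:
        return False
    return datetime.date(*map(int, m.groups())) == today


print(version_matches_today('2020.09.20.1', datetime.date(2020, 9, 20)))  # True
print(version_matches_today('2019.01.01', datetime.date(2020, 9, 20)))    # False
```

The shell version extracts the `YYYY.MM.DD` prefix with a `sed` capture and string-compares it against `date '+%Y.%m.%d'`; parsing into a real `date` object additionally rejects impossible dates.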
devscripts/run_tests.sh
@@ -1,22 +0,0 @@
#!/bin/bash

# Keep this list in sync with the `offlinetest` target in Makefile
DOWNLOAD_TESTS="age_restriction|download|iqiyi_sdk_interpreter|socks|subtitles|write_annotations|youtube_lists|youtube_signature"

test_set=""
multiprocess_args=""

case "$YTDL_TEST_SET" in
    core)
        test_set="-I test_($DOWNLOAD_TESTS)\.py"
    ;;
    download)
        test_set="-I test_(?!$DOWNLOAD_TESTS).+\.py"
        multiprocess_args="--processes=4 --process-timeout=540"
    ;;
    *)
        break
    ;;
esac

nosetests test --verbose $test_set $multiprocess_args
devscripts/show-downloads-statistics.py
@@ -1,47 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import itertools
import json
import os
import re
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.compat import (
    compat_print,
    compat_urllib_request,
)
from youtube_dl.utils import format_bytes


def format_size(bytes):
    return '%s (%d bytes)' % (format_bytes(bytes), bytes)


total_bytes = 0

for page in itertools.count(1):
    releases = json.loads(compat_urllib_request.urlopen(
        'https://api.github.com/repos/ytdl-org/youtube-dl/releases?page=%s' % page
    ).read().decode('utf-8'))

    if not releases:
        break

    for release in releases:
        compat_print(release['name'])
        for asset in release['assets']:
            asset_name = asset['name']
            total_bytes += asset['download_count'] * asset['size']
            if all(not re.match(p, asset_name) for p in (
                    r'^youtube-dl$',
                    r'^youtube-dl-\d{4}\.\d{2}\.\d{2}(?:\.\d+)?\.tar\.gz$',
                    r'^youtube-dl\.exe$')):
                continue
            compat_print(
                ' %s size: %s downloads: %d'
                % (asset_name, format_size(asset['size']), asset['download_count']))

compat_print('total downloads traffic: %s' % format_size(total_bytes))
devscripts/transition_helper.py (new file)
@@ -0,0 +1,40 @@
#!/usr/bin/env python

import sys, os

try:
    import urllib.request as compat_urllib_request
except ImportError:  # Python 2
    import urllib2 as compat_urllib_request

sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n')
sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n')
sys.stderr.write(u'The new location of the binaries is https://github.com/rg3/youtube-dl/downloads, not the git repository.\n\n')

try:
    raw_input()
except NameError:  # Python 3
    input()

filename = sys.argv[0]

API_URL = "https://api.github.com/repos/rg3/youtube-dl/downloads"
BIN_URL = "https://github.com/downloads/rg3/youtube-dl/youtube-dl"

if not os.access(filename, os.W_OK):
    sys.exit('ERROR: no write permissions on %s' % filename)

try:
    urlh = compat_urllib_request.urlopen(BIN_URL)
    newcontent = urlh.read()
    urlh.close()
except (IOError, OSError) as err:
    sys.exit('ERROR: unable to download latest version')

try:
    with open(filename, 'wb') as outf:
        outf.write(newcontent)
except (IOError, OSError) as err:
    sys.exit('ERROR: unable to overwrite current version')

sys.stderr.write(u'Done! Now you can run youtube-dl.\n')
devscripts/transition_helper_exe/setup.py (new file)
@@ -0,0 +1,12 @@
from distutils.core import setup
import py2exe

py2exe_options = {
    "bundle_files": 1,
    "compressed": 1,
    "optimize": 2,
    "dist_dir": '.',
    "dll_excludes": ['w9xpopen.exe']
}

setup(console=['youtube-dl.py'], options={ "py2exe": py2exe_options }, zipfile=None)
devscripts/transition_helper_exe/youtube-dl.py (new file)
@@ -0,0 +1,102 @@
#!/usr/bin/env python

import sys, os
import urllib2
import json, hashlib

def rsa_verify(message, signature, key):
    from struct import pack
    from hashlib import sha256
    from sys import version_info
    def b(x):
        if version_info[0] == 2: return x
        else: return x.encode('latin1')
    assert(type(message) == type(b('')))
    block_size = 0
    n = key[0]
    while n:
        block_size += 1
        n >>= 8
    signature = pow(int(signature, 16), key[1], key[0])
    raw_bytes = []
    while signature:
        raw_bytes.insert(0, pack("B", signature & 0xFF))
        signature >>= 8
    signature = (block_size - len(raw_bytes)) * b('\x00') + b('').join(raw_bytes)
    if signature[0:2] != b('\x00\x01'): return False
    signature = signature[2:]
    if not b('\x00') in signature: return False
    signature = signature[signature.index(b('\x00'))+1:]
    if not signature.startswith(b('\x30\x31\x30\x0D\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x01\x05\x00\x04\x20')): return False
    signature = signature[19:]
    if signature != sha256(message).digest(): return False
    return True

sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n')
sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n')
sys.stderr.write(u'From now on, get the binaries from http://rg3.github.com/youtube-dl/download.html, not from the git repository.\n\n')

raw_input()

filename = sys.argv[0]

UPDATE_URL = "http://rg3.github.io/youtube-dl/update/"
VERSION_URL = UPDATE_URL + 'LATEST_VERSION'
JSON_URL = UPDATE_URL + 'versions.json'
UPDATES_RSA_KEY = (0x9d60ee4d8f805312fdb15a62f87b95bd66177b91df176765d13514a0f1754bcd2057295c5b6f1d35daa6742c3ffc9a82d3e118861c207995a8031e151d863c9927e304576bc80692bc8e094896fcf11b66f3e29e04e3a71e9a11558558acea1840aec37fc396fb6b65dc81a1c4144e03bd1c011de62e3f1357b327d08426fe93, 65537)

if not os.access(filename, os.W_OK):
    sys.exit('ERROR: no write permissions on %s' % filename)

exe = os.path.abspath(filename)
directory = os.path.dirname(exe)
if not os.access(directory, os.W_OK):
    sys.exit('ERROR: no write permissions on %s' % directory)

try:
    versions_info = urllib2.urlopen(JSON_URL).read().decode('utf-8')
    versions_info = json.loads(versions_info)
except:
    sys.exit(u'ERROR: can\'t obtain versions info. Please try again later.')
if not 'signature' in versions_info:
    sys.exit(u'ERROR: the versions file is not signed or corrupted. Aborting.')
signature = versions_info['signature']
del versions_info['signature']
if not rsa_verify(json.dumps(versions_info, sort_keys=True), signature, UPDATES_RSA_KEY):
    sys.exit(u'ERROR: the versions file signature is invalid. Aborting.')

version = versions_info['versions'][versions_info['latest']]

try:
    urlh = urllib2.urlopen(version['exe'][0])
    newcontent = urlh.read()
    urlh.close()
except (IOError, OSError) as err:
    sys.exit('ERROR: unable to download latest version')

newcontent_hash = hashlib.sha256(newcontent).hexdigest()
if newcontent_hash != version['exe'][1]:
    sys.exit(u'ERROR: the downloaded file hash does not match. Aborting.')

try:
    with open(exe + '.new', 'wb') as outf:
        outf.write(newcontent)
except (IOError, OSError) as err:
    sys.exit(u'ERROR: unable to write the new version')

try:
    bat = os.path.join(directory, 'youtube-dl-updater.bat')
    b = open(bat, 'w')
    b.write("""
echo Updating youtube-dl...
ping 127.0.0.1 -n 5 -w 1000 > NUL
move /Y "%s.new" "%s"
del "%s"
\n""" %(exe, exe, bat))
    b.close()

    os.startfile(bat)
except (IOError, OSError) as err:
    sys.exit('ERROR: unable to overwrite current version')

sys.stderr.write(u'Done! Now you can run youtube-dl.\n')
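The rsa_verify helper above does textbook PKCS#1 v1.5 signature verification by hand: raise the signature to the public exponent, strip the `00 01 FF … 00` padding, match the hard-coded SHA-256 ASN.1 DigestInfo prefix, and compare digests. A Python 3 rendering of the same parsing can be exercised without a real key pair by using a contrived public exponent of 1, which makes the modular exponentiation the identity, so a correctly padded block "verifies" as-is. This trick is only useful for testing the parser, never for real cryptography; the test key below is made up:

```python
import hashlib
from binascii import hexlify

# ASN.1 DigestInfo prefix for SHA-256, as hard-coded in rsa_verify above.
SHA256_PREFIX = bytes.fromhex('3031300d060960864801650304020105000420')


def rsa_verify(message, signature, key):
    # Python 3 sketch of the updater's verifier: RSA "decrypt" with the
    # public key (n, e), then parse the EMSA-PKCS1-v1_5 block by hand.
    n, e = key
    block_size = (n.bit_length() + 7) // 8
    sig = pow(int(signature, 16), e, n).to_bytes(block_size, 'big')
    if sig[0:2] != b'\x00\x01':
        return False
    sig = sig[2:]
    if b'\x00' not in sig:
        return False
    sig = sig[sig.index(b'\x00') + 1:]      # skip the 0xFF padding
    if not sig.startswith(SHA256_PREFIX):
        return False
    return sig[len(SHA256_PREFIX):] == hashlib.sha256(message).digest()


# Contrived test key: e = 1 makes pow() the identity.  Real keys
# (like UPDATES_RSA_KEY above) use e = 65537.
n = 1 << 511                                # 64-byte modulus
message = b'versions.json contents'
digest = hashlib.sha256(message).digest()
padded = (b'\x00\x01' + b'\xff' * (64 - 3 - len(SHA256_PREFIX) - len(digest))
          + b'\x00' + SHA256_PREFIX + digest)
signature = hexlify(padded).decode()

print(rsa_verify(message, signature, (n, 1)))      # True
print(rsa_verify(b'tampered', signature, (n, 1)))  # False
```

Note that the DigestInfo prefix itself contains a `00` byte; the parser only works because `index(b'\x00')` finds the padding terminator first, which holds for any well-formed block.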
devscripts/zsh-completion.in
@@ -1,28 +0,0 @@
#compdef youtube-dl

__youtube_dl() {
    local curcontext="$curcontext" fileopts diropts cur prev
    typeset -A opt_args
    fileopts="{{fileopts}}"
    diropts="{{diropts}}"
    cur=$words[CURRENT]
    case $cur in
        :)
            _arguments '*: :(::ytfavorites ::ytrecommended ::ytsubscriptions ::ytwatchlater ::ythistory)'
        ;;
        *)
            prev=$words[CURRENT-1]
            if [[ ${prev} =~ ${fileopts} ]]; then
                _path_files
            elif [[ ${prev} =~ ${diropts} ]]; then
                _path_files -/
            elif [[ ${prev} == "--recode-video" ]]; then
                _arguments '*: :(mp4 flv ogg webm mkv)'
            else
                _arguments '*: :({{flags}})'
            fi
        ;;
    esac
}

__youtube_dl
devscripts/zsh-completion.py
@@ -1,49 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import os
from os.path import dirname as dirn
import sys

sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))
import youtube_dl

ZSH_COMPLETION_FILE = "youtube-dl.zsh"
ZSH_COMPLETION_TEMPLATE = "devscripts/zsh-completion.in"


def build_completion(opt_parser):
    opts = [opt for group in opt_parser.option_groups
            for opt in group.option_list]
    opts_file = [opt for opt in opts if opt.metavar == "FILE"]
    opts_dir = [opt for opt in opts if opt.metavar == "DIR"]

    fileopts = []
    for opt in opts_file:
        if opt._short_opts:
            fileopts.extend(opt._short_opts)
        if opt._long_opts:
            fileopts.extend(opt._long_opts)

    diropts = []
    for opt in opts_dir:
        if opt._short_opts:
            diropts.extend(opt._short_opts)
        if opt._long_opts:
            diropts.extend(opt._long_opts)

    flags = [opt.get_opt_string() for opt in opts]

    with open(ZSH_COMPLETION_TEMPLATE) as f:
        template = f.read()

    template = template.replace("{{fileopts}}", "|".join(fileopts))
    template = template.replace("{{diropts}}", "|".join(diropts))
    template = template.replace("{{flags}}", " ".join(flags))

    with open(ZSH_COMPLETION_FILE, "w") as f:
        f.write(template)


parser = youtube_dl.parseOpts()[0]
build_completion(parser)
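build_completion fills the `{{fileopts}}`, `{{diropts}}`, and `{{flags}}` placeholders in zsh-completion.in with plain `str.replace` calls. That templating step can be sketched in isolation; the option lists below stand in for what `parseOpts()` would provide and are illustrative only:

```python
# A cut-down stand-in for devscripts/zsh-completion.in.
template = """#compdef mytool
fileopts="{{fileopts}}"
diropts="{{diropts}}"
_arguments '*: :({{flags}})'
"""

# Hypothetical option lists standing in for the optparse introspection.
fileopts = ['-a', '--batch-file']
diropts = ['--cache-dir']
flags = ['--help', '--version', '--batch-file', '--cache-dir']

# FILE options are joined with '|' so zsh can regex-match the previous
# word; the flat flag list is space-separated for _arguments.
out = template.replace('{{fileopts}}', '|'.join(fileopts))
out = out.replace('{{diropts}}', '|'.join(diropts))
out = out.replace('{{flags}}', ' '.join(flags))

print(out)
```

The `|`-joined lists end up inside `[[ ${prev} =~ ${fileopts} ]]` in the generated script, which is why they are emitted as regex alternations rather than plain words.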
docs/.gitignore
@@ -1 +0,0 @@
_build/
docs/Makefile
@@ -1,177 +0,0 @@
# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/youtube-dl.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/youtube-dl.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/youtube-dl"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/youtube-dl"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
docs/conf.py
@@ -1,71 +0,0 @@
# coding: utf-8
#
# youtube-dl documentation build configuration file, created by
# sphinx-quickstart on Fri Mar 14 21:05:43 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys
import os
# Allows to import youtube_dl
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

# -- General configuration ------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'youtube-dl'
copyright = u'2014, Ricardo Garcia Gonzalez'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
from youtube_dl.version import __version__
version = __version__
# The full version, including alpha/beta/rc tags.
release = version

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Output file base name for HTML help builder.
htmlhelp_basename = 'youtube-dldoc'
docs/index.rst
@@ -1,23 +0,0 @@
Welcome to youtube-dl's documentation!
======================================

*youtube-dl* is a command-line program to download videos from YouTube.com and more sites.
It can also be used in Python code.

Developer guide
---------------

This section contains information for using *youtube-dl* from Python programs.

.. toctree::
   :maxdepth: 2

   module_guide

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
@@ -1,67 +0,0 @@
Using the ``youtube_dl`` module
===============================

When using the ``youtube_dl`` module, you start by creating an instance of :class:`YoutubeDL` and adding all the available extractors:

.. code-block:: python

    >>> from youtube_dl import YoutubeDL
    >>> ydl = YoutubeDL()
    >>> ydl.add_default_info_extractors()

Extracting video information
----------------------------

You use the :meth:`YoutubeDL.extract_info` method for getting the video information, which returns a dictionary:

.. code-block:: python

    >>> info = ydl.extract_info('http://www.youtube.com/watch?v=BaW_jenozKc', download=False)
    [youtube] Setting language
    [youtube] BaW_jenozKc: Downloading webpage
    [youtube] BaW_jenozKc: Downloading video info webpage
    [youtube] BaW_jenozKc: Extracting video information
    >>> info['title']
    'youtube-dl test video "\'/\\ä↭𝕐'
    >>> info['height'], info['width']
    (720, 1280)

If you want to download or play the video you can get its url:

.. code-block:: python

    >>> info['url']
    'https://...'

Extracting playlist information
-------------------------------

The playlist information is extracted in a similar way, but the dictionary is a bit different:

.. code-block:: python

    >>> playlist = ydl.extract_info('http://www.ted.com/playlists/13/open_source_open_world', download=False)
    [TED] open_source_open_world: Downloading playlist webpage
    ...
    >>> playlist['title']
    'Open-source, open world'

You can access the videos in the playlist with the ``entries`` field:

.. code-block:: python

    >>> for video in playlist['entries']:
    ...     print('Video #%d: %s' % (video['playlist_index'], video['title']))

    Video #1: How Arduino is open-sourcing imagination
    Video #2: The year open data went worldwide
    Video #3: Massive-scale online collaboration
    Video #4: The art of asking
    Video #5: How cognitive surplus will change the world
    Video #6: The birth of Wikipedia
    Video #7: Coding a better government
    Video #8: The era of open innovation
    Video #9: The currency of the new economy is trust
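The ``entries`` loop above treats the playlist result as ordinary Python data, so the same iteration works on any dictionary of that shape. A minimal self-contained sketch (the sample values below are invented, not real extractor output):

```python
# Hypothetical playlist dict shaped like the result of extract_info();
# the field names mirror the examples above, the values are made up.
playlist = {
    'title': 'Sample playlist',
    'entries': [
        {'playlist_index': 1, 'title': 'First video'},
        {'playlist_index': 2, 'title': 'Second video'},
    ],
}

# Format one line per entry, exactly as the doctest above does.
lines = [
    'Video #%d: %s' % (video['playlist_index'], video['title'])
    for video in playlist['entries']
]
print('\n'.join(lines))
```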
@@ -1,6 +0,0 @@
[wheel]
universal = True

[flake8]
exclude = youtube_dl/extractor/__init__.py,devscripts/buildserver.py,devscripts/lazy_load_template.py,devscripts/make_issue_template.py,setup.py,build,.git,venv
ignore = E402,E501,E731,E741,W503
128 setup.py
@@ -1,19 +1,17 @@
 #!/usr/bin/env python
-# coding: utf-8
+# -*- coding: utf-8 -*-

 from __future__ import print_function

-import os.path
-import warnings
+import pkg_resources
 import sys

 try:
-    from setuptools import setup, Command
+    from setuptools import setup
     setuptools_available = True
 except ImportError:
-    from distutils.core import setup, Command
+    from distutils.core import setup
     setuptools_available = False
-from distutils.spawn import spawn

 try:
     # This will create an exe that needs Microsoft Visual C++ 2008
@@ -21,128 +19,74 @@ try:
     import py2exe
 except ImportError:
     if len(sys.argv) >= 2 and sys.argv[1] == 'py2exe':
-        print('Cannot import py2exe', file=sys.stderr)
+        print("Cannot import py2exe", file=sys.stderr)
         exit(1)

 py2exe_options = {
-    'bundle_files': 1,
-    'compressed': 1,
-    'optimize': 2,
-    'dist_dir': '.',
-    'dll_excludes': ['w9xpopen.exe', 'crypt32.dll'],
+    "bundle_files": 1,
+    "compressed": 1,
+    "optimize": 2,
+    "dist_dir": '.',
+    "dll_excludes": ['w9xpopen.exe'],
 }

-# Get the version from youtube_dl/version.py without importing the package
-exec(compile(open('youtube_dl/version.py').read(),
-             'youtube_dl/version.py', 'exec'))
-
-DESCRIPTION = 'YouTube video downloader'
-LONG_DESCRIPTION = 'Command-line program to download videos from YouTube.com and other video sites'
-
 py2exe_console = [{
-    'script': './youtube_dl/__main__.py',
-    'dest_base': 'youtube-dl',
-    'version': __version__,
-    'description': DESCRIPTION,
-    'comments': LONG_DESCRIPTION,
-    'product_name': 'youtube-dl',
-    'product_version': __version__,
+    "script": "./youtube_dl/__main__.py",
+    "dest_base": "youtube-dl",
 }]

 py2exe_params = {
     'console': py2exe_console,
-    'options': {'py2exe': py2exe_options},
+    'options': {"py2exe": py2exe_options},
     'zipfile': None
 }

 if len(sys.argv) >= 2 and sys.argv[1] == 'py2exe':
     params = py2exe_params
 else:
-    files_spec = [
-        ('etc/bash_completion.d', ['youtube-dl.bash-completion']),
-        ('etc/fish/completions', ['youtube-dl.fish']),
-        ('share/doc/youtube_dl', ['README.txt']),
-        ('share/man/man1', ['youtube-dl.1'])
-    ]
-    root = os.path.dirname(os.path.abspath(__file__))
-    data_files = []
-    for dirname, files in files_spec:
-        resfiles = []
-        for fn in files:
-            if not os.path.exists(fn):
-                warnings.warn('Skipping file %s since it is not present. Type  make  to build all automatically generated files.' % fn)
-            else:
-                resfiles.append(fn)
-        data_files.append((dirname, resfiles))
-
     params = {
-        'data_files': data_files,
+        'data_files': [  # Installing system-wide would require sudo...
+            ('etc/bash_completion.d', ['youtube-dl.bash-completion']),
+            ('share/doc/youtube_dl', ['README.txt']),
+            ('share/man/man1', ['youtube-dl.1'])
+        ]
     }
     if setuptools_available:
         params['entry_points'] = {'console_scripts': ['youtube-dl = youtube_dl:main']}
     else:
         params['scripts'] = ['bin/youtube-dl']

-class build_lazy_extractors(Command):
-    description = 'Build the extractor lazy loading module'
-    user_options = []
-
-    def initialize_options(self):
-        pass
-
-    def finalize_options(self):
-        pass
-
-    def run(self):
-        spawn(
-            [sys.executable, 'devscripts/make_lazy_extractors.py', 'youtube_dl/extractor/lazy_extractors.py'],
-            dry_run=self.dry_run,
-        )
-
+# Get the version from youtube_dl/version.py without importing the package
+exec(compile(open('youtube_dl/version.py').read(),
+             'youtube_dl/version.py', 'exec'))

 setup(
     name='youtube_dl',
     version=__version__,
-    description=DESCRIPTION,
-    long_description=LONG_DESCRIPTION,
-    url='https://github.com/ytdl-org/youtube-dl',
+    description='YouTube video downloader',
+    long_description='Small command-line program to download videos from'
+    ' YouTube.com and other video sites.',
+    url='https://github.com/rg3/youtube-dl',
     author='Ricardo Garcia',
     author_email='ytdl@yt-dl.org',
-    maintainer='Sergey M.',
-    maintainer_email='dstftw@gmail.com',
-    license='Unlicense',
-    packages=[
-        'youtube_dl',
-        'youtube_dl.extractor', 'youtube_dl.downloader',
-        'youtube_dl.postprocessor'],
+    maintainer='Philipp Hagemeister',
+    maintainer_email='phihag@phihag.de',
+    packages=['youtube_dl', 'youtube_dl.extractor'],

     # Provokes warning on most systems (why?!)
     # test_suite = 'nose.collector',
     # test_requires = ['nosetest'],

     classifiers=[
-        'Topic :: Multimedia :: Video',
-        'Development Status :: 5 - Production/Stable',
-        'Environment :: Console',
-        'License :: Public Domain',
-        'Programming Language :: Python',
-        'Programming Language :: Python :: 2',
-        'Programming Language :: Python :: 2.6',
-        'Programming Language :: Python :: 2.7',
-        'Programming Language :: Python :: 3',
-        'Programming Language :: Python :: 3.2',
-        'Programming Language :: Python :: 3.3',
-        'Programming Language :: Python :: 3.4',
-        'Programming Language :: Python :: 3.5',
-        'Programming Language :: Python :: 3.6',
-        'Programming Language :: Python :: 3.7',
-        'Programming Language :: Python :: 3.8',
-        'Programming Language :: Python :: Implementation',
-        'Programming Language :: Python :: Implementation :: CPython',
-        'Programming Language :: Python :: Implementation :: IronPython',
-        'Programming Language :: Python :: Implementation :: Jython',
-        'Programming Language :: Python :: Implementation :: PyPy',
+        "Topic :: Multimedia :: Video",
+        "Development Status :: 5 - Production/Stable",
+        "Environment :: Console",
+        "License :: Public Domain",
+        "Programming Language :: Python :: 2.6",
+        "Programming Language :: Python :: 2.7",
+        "Programming Language :: Python :: 3",
+        "Programming Language :: Python :: 3.3"
     ],

-    cmdclass={'build_lazy_extractors': build_lazy_extractors},
     **params
 )
229 test/helper.py
@@ -1,5 +1,3 @@
-from __future__ import unicode_literals
-
 import errno
 import io
 import hashlib
@@ -7,31 +5,18 @@ import json
 import os.path
 import re
 import types
-import ssl
 import sys

 import youtube_dl.extractor
 from youtube_dl import YoutubeDL
-from youtube_dl.compat import (
-    compat_os_name,
-    compat_str,
-)
-from youtube_dl.utils import (
-    preferredencoding,
-    write_string,
-)
+from youtube_dl.utils import preferredencoding


 def get_params(override=None):
     PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                    "parameters.json")
-    LOCAL_PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),
-                                         "local_parameters.json")
     with io.open(PARAMETERS_FILE, encoding='utf-8') as pf:
         parameters = json.load(pf)
-    if os.path.exists(LOCAL_PARAMETERS_FILE):
-        with io.open(LOCAL_PARAMETERS_FILE, encoding='utf-8') as pf:
-            parameters.update(json.load(pf))
     if override:
         parameters.update(override)
     return parameters
@@ -51,11 +36,11 @@ def report_warning(message):
     Print the message to stderr, it will be prefixed with 'WARNING:'
     If stderr is a tty file the 'WARNING:' will be colored
     '''
-    if sys.stderr.isatty() and compat_os_name != 'nt':
-        _msg_header = '\033[0;33mWARNING:\033[0m'
+    if sys.stderr.isatty() and os.name != 'nt':
+        _msg_header = u'\033[0;33mWARNING:\033[0m'
     else:
-        _msg_header = 'WARNING:'
-    output = '%s %s\n' % (_msg_header, message)
+        _msg_header = u'WARNING:'
+    output = u'%s %s\n' % (_msg_header, message)
     if 'b' in getattr(sys.stderr, 'mode', '') or sys.version_info[0] < 3:
         output = output.encode(preferredencoding())
     sys.stderr.write(output)
@@ -66,9 +51,9 @@ class FakeYDL(YoutubeDL):
         # Different instances of the downloader can't share the same dictionary
         # some test set the "sublang" parameter, which would break the md5 checks.
         params = get_params(override=override)
-        super(FakeYDL, self).__init__(params, auto_init=False)
+        super(FakeYDL, self).__init__(params)
         self.result = []

     def to_screen(self, s, skip_eol=None):
         print(s)
@@ -81,202 +66,20 @@ class FakeYDL(YoutubeDL):
     def expect_warning(self, regex):
         # Silence an expected warning matching a regex
         old_report_warning = self.report_warning

         def report_warning(self, message):
-            if re.match(regex, message):
-                return
+            if re.match(regex, message): return
             old_report_warning(message)
         self.report_warning = types.MethodType(report_warning, self)


-def gettestcases(include_onlymatching=False):
+def get_testcases():
     for ie in youtube_dl.extractor.gen_extractors():
-        for tc in ie.get_testcases(include_onlymatching):
-            yield tc
+        t = getattr(ie, '_TEST', None)
+        if t:
+            t['name'] = type(ie).__name__[:-len('IE')]
+            yield t
+        for t in getattr(ie, '_TESTS', []):
+            t['name'] = type(ie).__name__[:-len('IE')]
+            yield t


 md5 = lambda s: hashlib.md5(s.encode('utf-8')).hexdigest()


-def expect_value(self, got, expected, field):
-    if isinstance(expected, compat_str) and expected.startswith('re:'):
-        match_str = expected[len('re:'):]
-        match_rex = re.compile(match_str)
-
-        self.assertTrue(
-            isinstance(got, compat_str),
-            'Expected a %s object, but got %s for field %s' % (
-                compat_str.__name__, type(got).__name__, field))
-        self.assertTrue(
-            match_rex.match(got),
-            'field %s (value: %r) should match %r' % (field, got, match_str))
-    elif isinstance(expected, compat_str) and expected.startswith('startswith:'):
-        start_str = expected[len('startswith:'):]
-        self.assertTrue(
-            isinstance(got, compat_str),
-            'Expected a %s object, but got %s for field %s' % (
-                compat_str.__name__, type(got).__name__, field))
-        self.assertTrue(
-            got.startswith(start_str),
-            'field %s (value: %r) should start with %r' % (field, got, start_str))
-    elif isinstance(expected, compat_str) and expected.startswith('contains:'):
-        contains_str = expected[len('contains:'):]
-        self.assertTrue(
-            isinstance(got, compat_str),
-            'Expected a %s object, but got %s for field %s' % (
-                compat_str.__name__, type(got).__name__, field))
-        self.assertTrue(
-            contains_str in got,
-            'field %s (value: %r) should contain %r' % (field, got, contains_str))
-    elif isinstance(expected, type):
-        self.assertTrue(
-            isinstance(got, expected),
-            'Expected type %r for field %s, but got value %r of type %r' % (expected, field, got, type(got)))
-    elif isinstance(expected, dict) and isinstance(got, dict):
-        expect_dict(self, got, expected)
-    elif isinstance(expected, list) and isinstance(got, list):
-        self.assertEqual(
-            len(expected), len(got),
-            'Expect a list of length %d, but got a list of length %d for field %s' % (
-                len(expected), len(got), field))
-        for index, (item_got, item_expected) in enumerate(zip(got, expected)):
-            type_got = type(item_got)
-            type_expected = type(item_expected)
-            self.assertEqual(
-                type_expected, type_got,
-                'Type mismatch for list item at index %d for field %s, expected %r, got %r' % (
-                    index, field, type_expected, type_got))
-            expect_value(self, item_got, item_expected, field)
-    else:
-        if isinstance(expected, compat_str) and expected.startswith('md5:'):
-            self.assertTrue(
-                isinstance(got, compat_str),
-                'Expected field %s to be a unicode object, but got value %r of type %r' % (field, got, type(got)))
-            got = 'md5:' + md5(got)
-        elif isinstance(expected, compat_str) and re.match(r'^(?:min|max)?count:\d+', expected):
-            self.assertTrue(
-                isinstance(got, (list, dict)),
-                'Expected field %s to be a list or a dict, but it is of type %s' % (
-                    field, type(got).__name__))
-            op, _, expected_num = expected.partition(':')
-            expected_num = int(expected_num)
-            if op == 'mincount':
-                assert_func = assertGreaterEqual
-                msg_tmpl = 'Expected %d items in field %s, but only got %d'
-            elif op == 'maxcount':
-                assert_func = assertLessEqual
-                msg_tmpl = 'Expected maximum %d items in field %s, but got %d'
-            elif op == 'count':
-                assert_func = assertEqual
-                msg_tmpl = 'Expected exactly %d items in field %s, but got %d'
-            else:
-                assert False
-            assert_func(
-                self, len(got), expected_num,
-                msg_tmpl % (expected_num, field, len(got)))
-            return
-        self.assertEqual(
-            expected, got,
-            'Invalid value for field %s, expected %r, got %r' % (field, expected, got))
-
-
-def expect_dict(self, got_dict, expected_dict):
-    for info_field, expected in expected_dict.items():
-        got = got_dict.get(info_field)
-        expect_value(self, got, expected, info_field)
-
-
-def expect_info_dict(self, got_dict, expected_dict):
-    expect_dict(self, got_dict, expected_dict)
-    # Check for the presence of mandatory fields
-    if got_dict.get('_type') not in ('playlist', 'multi_video'):
-        for key in ('id', 'url', 'title', 'ext'):
-            self.assertTrue(got_dict.get(key), 'Missing mandatory field %s' % key)
-    # Check for mandatory fields that are automatically set by YoutubeDL
-    for key in ['webpage_url', 'extractor', 'extractor_key']:
-        self.assertTrue(got_dict.get(key), 'Missing field: %s' % key)
-
-    # Are checkable fields missing from the test case definition?
-    test_info_dict = dict((key, value if not isinstance(value, compat_str) or len(value) < 250 else 'md5:' + md5(value))
-                          for key, value in got_dict.items()
-                          if value and key in ('id', 'title', 'description', 'uploader', 'upload_date', 'timestamp', 'uploader_id', 'location', 'age_limit'))
-    missing_keys = set(test_info_dict.keys()) - set(expected_dict.keys())
-    if missing_keys:
-        def _repr(v):
-            if isinstance(v, compat_str):
-                return "'%s'" % v.replace('\\', '\\\\').replace("'", "\\'").replace('\n', '\\n')
-            else:
-                return repr(v)
-        info_dict_str = ''
-        if len(missing_keys) != len(expected_dict):
-            info_dict_str += ''.join(
-                '    %s: %s,\n' % (_repr(k), _repr(v))
-                for k, v in test_info_dict.items() if k not in missing_keys)
-
-            if info_dict_str:
-                info_dict_str += '\n'
-        info_dict_str += ''.join(
-            '    %s: %s,\n' % (_repr(k), _repr(test_info_dict[k]))
-            for k in missing_keys)
-        write_string(
-            '\n\'info_dict\': {\n' + info_dict_str + '},\n', out=sys.stderr)
-        self.assertFalse(
-            missing_keys,
-            'Missing keys in test definition: %s' % (
-                ', '.join(sorted(missing_keys))))
-
-
-def assertRegexpMatches(self, text, regexp, msg=None):
-    if hasattr(self, 'assertRegexp'):
-        return self.assertRegexp(text, regexp, msg)
-    else:
-        m = re.match(regexp, text)
-        if not m:
-            note = 'Regexp didn\'t match: %r not found' % (regexp)
-            if len(text) < 1000:
-                note += ' in %r' % text
-            if msg is None:
-                msg = note
-            else:
-                msg = note + ', ' + msg
-            self.assertTrue(m, msg)
-
-
-def assertGreaterEqual(self, got, expected, msg=None):
-    if not (got >= expected):
-        if msg is None:
-            msg = '%r not greater than or equal to %r' % (got, expected)
-        self.assertTrue(got >= expected, msg)
-
-
-def assertLessEqual(self, got, expected, msg=None):
-    if not (got <= expected):
-        if msg is None:
-            msg = '%r not less than or equal to %r' % (got, expected)
-        self.assertTrue(got <= expected, msg)
-
-
-def assertEqual(self, got, expected, msg=None):
-    if not (got == expected):
-        if msg is None:
-            msg = '%r not equal to %r' % (got, expected)
-        self.assertTrue(got == expected, msg)
-
-
-def expect_warnings(ydl, warnings_re):
-    real_warning = ydl.report_warning
-
-    def _report_warning(w):
-        if not any(re.search(w_re, w) for w_re in warnings_re):
-            real_warning(w)
-
-    ydl.report_warning = _report_warning
-
-
-def http_server_port(httpd):
-    if os.name == 'java' and isinstance(httpd.socket, ssl.SSLSocket):
-        # In Jython SSLSocket is not a subclass of socket.socket
-        sock = httpd.socket.sock
-    else:
-        sock = httpd.socket
-    return sock.getsockname()[1]
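In the old-side `get_params` above, defaults from `parameters.json` are layered with an optional local-parameters file and per-test overrides, each applied with a plain `dict.update`. A minimal standalone sketch of that layering (the dicts below are illustrative stand-ins, not read from disk):

```python
# Illustrative sketch of the get_params() layering: defaults first, then an
# optional local file, then the per-test override. Values are invented.
defaults = {'retries': 10, 'simulate': False, 'test': True}
local_parameters = {'retries': 3}   # would come from local_parameters.json
override = {'simulate': True}       # per-test override argument

params = dict(defaults)
params.update(local_parameters)  # local file wins over defaults
params.update(override)          # explicit override wins over everything
print(params)
```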
@@ -7,7 +7,8 @@
     "forcethumbnail": false,
     "forcetitle": false,
     "forceurl": false,
-    "format": "best",
+    "format": null,
+    "format_limit": null,
     "ignoreerrors": false,
     "listformats": null,
     "logtostderr": false,
@@ -26,8 +27,9 @@
     "rejecttitle": null,
     "retries": 10,
     "simulate": false,
+    "skip_download": false,
     "subtitleslang": null,
-    "subtitlesformat": "best",
+    "subtitlesformat": "srt",
     "test": true,
     "updatetime": true,
     "usenetrc": false,
@@ -37,7 +39,5 @@
     "writeinfojson": true,
     "writesubtitles": false,
     "allsubtitles": false,
-    "listssubtitles": false,
-    "socket_timeout": 20,
-    "fixup": "never"
+    "listssubtitles": false
 }
1 test/swftests/.gitignore vendored
@@ -1 +0,0 @@
*.swf

@@ -1,19 +0,0 @@
// input: [["a", "b", "c", "d"]]
// output: ["c", "b", "a", "d"]

package {
public class ArrayAccess {
    public static function main(ar:Array):Array {
        var aa:ArrayAccess = new ArrayAccess();
        return aa.f(ar, 2);
    }

    private function f(ar:Array, num:Number):Array{
        var x:String = ar[0];
        var y:String = ar[num % ar.length];
        ar[0] = y;
        ar[num] = x;
        return ar;
    }
}
}

@@ -1,17 +0,0 @@
// input: []
// output: 121

package {
public class ClassCall {
    public static function main():int{
        var f:OtherClass = new OtherClass();
        return f.func(100,20);
    }
}
}

class OtherClass {
    public function func(x: int, y: int):int {
        return x+y+1;
    }
}

@@ -1,15 +0,0 @@
// input: []
// output: 0

package {
public class ClassConstruction {
    public static function main():int{
        var f:Foo = new Foo();
        return 0;
    }
}
}

class Foo {

}

@@ -1,18 +0,0 @@
// input: []
// output: 4

package {
public class ConstArrayAccess {
    private static const x:int = 2;
    private static const ar:Array = ["42", "3411"];

    public static function main():int{
        var c:ConstArrayAccess = new ConstArrayAccess();
        return c.f();
    }

    public function f(): int {
        return ar[1].length;
    }
}
}

@@ -1,12 +0,0 @@
// input: []
// output: 2

package {
public class ConstantInt {
    private static const x:int = 2;

    public static function main():int{
        return x;
    }
}
}

@@ -1,10 +0,0 @@
// input: [{"x": 1, "y": 2}]
// output: 3

package {
public class DictCall {
    public static function main(d:Object):int{
        return d.x + d.y;
    }
}
}

@@ -1,10 +0,0 @@
// input: []
// output: false

package {
public class EqualsOperator {
    public static function main():Boolean{
        return 1 == 2;
    }
}
}

@@ -1,13 +0,0 @@
// input: [1, 2]
// output: 3

package {
public class LocalVars {
    public static function main(a:int, b:int):int{
        var c:int = a + b + b;
        var d:int = c - b;
        var e:int = d;
        return e;
    }
}
}

@@ -1,22 +0,0 @@
// input: [1]
// output: 2

package {
public class MemberAssignment {
    public var v:int;

    public function g():int {
        return this.v;
    }

    public function f(a:int):int{
        this.v = a;
        return this.v + this.g();
    }

    public static function main(a:int): int {
        var v:MemberAssignment = new MemberAssignment();
        return v.f(a);
    }
}
}

@@ -1,24 +0,0 @@
// input: []
// output: 123

package {
public class NeOperator {
    public static function main(): int {
        var res:int = 0;
        if (1 != 2) {
            res += 3;
        } else {
            res += 4;
        }
        if (2 != 2) {
            res += 10;
        } else {
            res += 20;
        }
        if (9 == 9) {
            res += 100;
        }
        return res;
    }
}
}

@@ -1,21 +0,0 @@
// input: []
// output: 9

package {
public class PrivateCall {
    public static function main():int{
        var f:OtherClass = new OtherClass();
        return f.func();
    }
}
}

class OtherClass {
    private function pf():int {
        return 9;
    }

    public function func():int {
        return this.pf();
    }
}

@@ -1,22 +0,0 @@
// input: []
// output: 9

package {
public class PrivateVoidCall {
    public static function main():int{
        var f:OtherClass = new OtherClass();
        f.func();
        return 9;
    }
}
}

class OtherClass {
    private function pf():void {
        ;
    }

    public function func():void {
        this.pf();
    }
}

@@ -1,13 +0,0 @@
// input: [1]
// output: 1

package {
public class StaticAssignment {
    public static var v:int;

    public static function main(a:int):int{
        v = a;
        return v;
    }
}
}

@@ -1,16 +0,0 @@
// input: []
// output: 1

package {
public class StaticRetrieval {
    public static var v:int;

    public static function main():int{
        if (v) {
            return 0;
        } else {
            return 1;
        }
    }
}
}

@@ -1,11 +0,0 @@
// input: []
// output: 3

package {
public class StringBasics {
    public static function main():int{
        var s:String = "abc";
        return s.length;
    }
}
}

@@ -1,11 +0,0 @@
// input: []
// output: 9897

package {
public class StringCharCodeAt {
    public static function main():int{
        var s:String = "abc";
        return s.charCodeAt(1) * 100 + s.charCodeAt();
    }
}
}

@@ -1,11 +0,0 @@
// input: []
// output: 2

package {
public class StringConversion {
    public static function main():int{
        var s:String = String(99);
        return s.length;
    }
}
}
@@ -1,7 +1,4 @@
 #!/usr/bin/env python
-# coding: utf-8
 
-from __future__ import unicode_literals
-
 # Allow direct execution
 import os
@@ -9,17 +6,7 @@ import sys
 import unittest
 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 
-import copy
+from test.helper import FakeYDL
 
-from test.helper import FakeYDL, assertRegexpMatches
-from youtube_dl import YoutubeDL
-from youtube_dl.compat import compat_str, compat_urllib_error
-from youtube_dl.extractor import YoutubeIE
-from youtube_dl.extractor.common import InfoExtractor
-from youtube_dl.postprocessor.common import PostProcessor
-from youtube_dl.utils import ExtractorError, match_filter_func
-
-TEST_URL = 'http://localhost/sample.mp4'
-
 
 class YDL(FakeYDL):
@@ -35,580 +22,111 @@ class YDL(FakeYDL):
         self.msgs.append(msg)
 
 
-def _make_result(formats, **kwargs):
-    res = {
-        'formats': formats,
-        'id': 'testid',
-        'title': 'testttitle',
-        'extractor': 'testex',
-        'extractor_key': 'TestEx',
-    }
-    res.update(**kwargs)
-    return res
-
-
 class TestFormatSelection(unittest.TestCase):
     def test_prefer_free_formats(self):
         # Same resolution => download webm
         ydl = YDL()
         ydl.params['prefer_free_formats'] = True
         formats = [
-            {'ext': 'webm', 'height': 460, 'url': TEST_URL},
-            {'ext': 'mp4', 'height': 460, 'url': TEST_URL},
+            {u'ext': u'webm', u'height': 460},
+            {u'ext': u'mp4', u'height': 460},
         ]
-        info_dict = _make_result(formats)
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        info_dict = {u'formats': formats, u'extractor': u'test'}
         ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['ext'], 'webm')
+        self.assertEqual(downloaded[u'ext'], u'webm')
 
         # Different resolution => download best quality (mp4)
         ydl = YDL()
         ydl.params['prefer_free_formats'] = True
         formats = [
-            {'ext': 'webm', 'height': 720, 'url': TEST_URL},
-            {'ext': 'mp4', 'height': 1080, 'url': TEST_URL},
+            {u'ext': u'webm', u'height': 720},
+            {u'ext': u'mp4', u'height': 1080},
         ]
-        info_dict['formats'] = formats
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        info_dict[u'formats'] = formats
         ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['ext'], 'mp4')
+        self.assertEqual(downloaded[u'ext'], u'mp4')
 
-        # No prefer_free_formats => prefer mp4 and flv for greater compatibility
+        # No prefer_free_formats => keep original formats order
         ydl = YDL()
         ydl.params['prefer_free_formats'] = False
         formats = [
-            {'ext': 'webm', 'height': 720, 'url': TEST_URL},
-            {'ext': 'mp4', 'height': 720, 'url': TEST_URL},
-            {'ext': 'flv', 'height': 720, 'url': TEST_URL},
+            {u'ext': u'webm', u'height': 720},
+            {u'ext': u'flv', u'height': 720},
        ]
-        info_dict['formats'] = formats
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        info_dict[u'formats'] = formats
         ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['ext'], 'mp4')
+        self.assertEqual(downloaded[u'ext'], u'flv')
 
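The three assertions in test_prefer_free_formats can be condensed into a small sketch of the tie-breaking rule they exercise. `pick_format` is a hypothetical helper written for illustration, not youtube-dl's actual sorting code:

```python
# Sketch of the rule test_prefer_free_formats checks: height dominates,
# and a free container (webm) only wins when heights tie and
# prefer_free is set. Hypothetical helper, not youtube-dl's API.
def pick_format(formats, prefer_free=False):
    def key(f):
        free_rank = 1 if f['ext'] in ('webm', 'ogg') else 0
        return (f['height'], free_rank if prefer_free else 0)
    return max(formats, key=key)

same_res = [{'ext': 'webm', 'height': 460}, {'ext': 'mp4', 'height': 460}]
diff_res = [{'ext': 'webm', 'height': 720}, {'ext': 'mp4', 'height': 1080}]
print(pick_format(same_res, prefer_free=True)['ext'])  # webm
print(pick_format(diff_res, prefer_free=True)['ext'])  # mp4
```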
+    def test_format_limit(self):
+        formats = [
+            {u'format_id': u'meh', u'url': u'http://example.com/meh'},
+            {u'format_id': u'good', u'url': u'http://example.com/good'},
+            {u'format_id': u'great', u'url': u'http://example.com/great'},
+            {u'format_id': u'excellent', u'url': u'http://example.com/exc'},
+        ]
+        info_dict = {
+            u'formats': formats, u'extractor': u'test', 'id': 'testvid'}
+
         ydl = YDL()
-        ydl.params['prefer_free_formats'] = False
-        formats = [
-            {'ext': 'flv', 'height': 720, 'url': TEST_URL},
-            {'ext': 'webm', 'height': 720, 'url': TEST_URL},
-        ]
-        info_dict['formats'] = formats
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
         ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['ext'], 'flv')
+        self.assertEqual(downloaded[u'format_id'], u'excellent')
 
+        ydl = YDL({'format_limit': 'good'})
+        assert ydl.params['format_limit'] == 'good'
+        ydl.process_ie_result(info_dict)
+        downloaded = ydl.downloaded_info_dicts[0]
+        self.assertEqual(downloaded[u'format_id'], u'good')
+
+        ydl = YDL({'format_limit': 'great', 'format': 'all'})
+        ydl.process_ie_result(info_dict)
+        self.assertEqual(ydl.downloaded_info_dicts[0][u'format_id'], u'meh')
+        self.assertEqual(ydl.downloaded_info_dicts[1][u'format_id'], u'good')
+        self.assertEqual(ydl.downloaded_info_dicts[2][u'format_id'], u'great')
+        self.assertTrue('3' in ydl.msgs[0])
+
+        ydl = YDL()
+        ydl.params['format_limit'] = 'excellent'
+        ydl.process_ie_result(info_dict)
+        downloaded = ydl.downloaded_info_dicts[0]
+        self.assertEqual(downloaded[u'format_id'], u'excellent')
+
     def test_format_selection(self):
         formats = [
-            {'format_id': '35', 'ext': 'mp4', 'preference': 1, 'url': TEST_URL},
-            {'format_id': 'example-with-dashes', 'ext': 'webm', 'preference': 1, 'url': TEST_URL},
-            {'format_id': '45', 'ext': 'webm', 'preference': 2, 'url': TEST_URL},
-            {'format_id': '47', 'ext': 'webm', 'preference': 3, 'url': TEST_URL},
-            {'format_id': '2', 'ext': 'flv', 'preference': 4, 'url': TEST_URL},
+            {u'format_id': u'35', u'ext': u'mp4'},
+            {u'format_id': u'45', u'ext': u'webm'},
+            {u'format_id': u'47', u'ext': u'webm'},
+            {u'format_id': u'2', u'ext': u'flv'},
         ]
-        info_dict = _make_result(formats)
+        info_dict = {u'formats': formats, u'extractor': u'test'}
 
-        ydl = YDL({'format': '20/47'})
-        ydl.process_ie_result(info_dict.copy())
+        ydl = YDL({'format': u'20/47'})
+        ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], '47')
+        self.assertEqual(downloaded['format_id'], u'47')
 
-        ydl = YDL({'format': '20/71/worst'})
-        ydl.process_ie_result(info_dict.copy())
+        ydl = YDL({'format': u'20/71/worst'})
+        ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], '35')
+        self.assertEqual(downloaded['format_id'], u'35')
 
         ydl = YDL()
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], '2')
-
-        ydl = YDL({'format': 'webm/mp4'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], '47')
-
-        ydl = YDL({'format': '3gp/40/mp4'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], '35')
-
-        ydl = YDL({'format': 'example-with-dashes'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'example-with-dashes')
-
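The '/' notation exercised in test_format_selection means "first available alternative": '20/47' selects format 20 if present, else 47. A simplified illustration of that fallback walk (`select_format` is a sketch, not youtube-dl's real parser, which also understands best/worst and merge specs):

```python
# Illustration of the '/' fallback notation: try each format_id in turn
# and return the first one the extractor actually offers.
def select_format(spec, formats):
    available = {f['format_id']: f for f in formats}
    for format_id in spec.split('/'):
        if format_id in available:
            return available[format_id]
    return None  # the real selector would also handle best/worst here

formats = [{'format_id': '35'}, {'format_id': '47'}]
print(select_format('20/47', formats)['format_id'])  # 47
```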
-    def test_format_selection_audio(self):
-        formats = [
-            {'format_id': 'audio-low', 'ext': 'webm', 'preference': 1, 'vcodec': 'none', 'url': TEST_URL},
-            {'format_id': 'audio-mid', 'ext': 'webm', 'preference': 2, 'vcodec': 'none', 'url': TEST_URL},
-            {'format_id': 'audio-high', 'ext': 'flv', 'preference': 3, 'vcodec': 'none', 'url': TEST_URL},
-            {'format_id': 'vid', 'ext': 'mp4', 'preference': 4, 'url': TEST_URL},
-        ]
-        info_dict = _make_result(formats)
-
-        ydl = YDL({'format': 'bestaudio'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'audio-high')
-
-        ydl = YDL({'format': 'worstaudio'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'audio-low')
-
-        formats = [
-            {'format_id': 'vid-low', 'ext': 'mp4', 'preference': 1, 'url': TEST_URL},
-            {'format_id': 'vid-high', 'ext': 'mp4', 'preference': 2, 'url': TEST_URL},
-        ]
-        info_dict = _make_result(formats)
-
-        ydl = YDL({'format': 'bestaudio/worstaudio/best'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'vid-high')
-
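What bestaudio/worstaudio pick in the test above can be restated in a few lines: audio-only formats are those with `vcodec == 'none'`, ranked here by the test's explicit 'preference' field. The names below are illustrative only:

```python
# Sketch of bestaudio/worstaudio over the test's format list: filter to
# audio-only entries, then take the max/min by preference.
formats = [
    {'format_id': 'audio-low', 'preference': 1, 'vcodec': 'none'},
    {'format_id': 'audio-mid', 'preference': 2, 'vcodec': 'none'},
    {'format_id': 'audio-high', 'preference': 3, 'vcodec': 'none'},
    {'format_id': 'vid', 'preference': 4},  # has video, so excluded
]
audio = [f for f in formats if f.get('vcodec') == 'none']
best = max(audio, key=lambda f: f['preference'])
worst = min(audio, key=lambda f: f['preference'])
print(best['format_id'], worst['format_id'])  # audio-high audio-low
```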
-    def test_format_selection_audio_exts(self):
-        formats = [
-            {'format_id': 'mp3-64', 'ext': 'mp3', 'abr': 64, 'url': 'http://_', 'vcodec': 'none'},
-            {'format_id': 'ogg-64', 'ext': 'ogg', 'abr': 64, 'url': 'http://_', 'vcodec': 'none'},
-            {'format_id': 'aac-64', 'ext': 'aac', 'abr': 64, 'url': 'http://_', 'vcodec': 'none'},
-            {'format_id': 'mp3-32', 'ext': 'mp3', 'abr': 32, 'url': 'http://_', 'vcodec': 'none'},
-            {'format_id': 'aac-32', 'ext': 'aac', 'abr': 32, 'url': 'http://_', 'vcodec': 'none'},
-        ]
-
-        info_dict = _make_result(formats)
-        ydl = YDL({'format': 'best'})
-        ie = YoutubeIE(ydl)
-        ie._sort_formats(info_dict['formats'])
-        ydl.process_ie_result(copy.deepcopy(info_dict))
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'aac-64')
-
-        ydl = YDL({'format': 'mp3'})
-        ie = YoutubeIE(ydl)
-        ie._sort_formats(info_dict['formats'])
-        ydl.process_ie_result(copy.deepcopy(info_dict))
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'mp3-64')
-
-        ydl = YDL({'prefer_free_formats': True})
-        ie = YoutubeIE(ydl)
-        ie._sort_formats(info_dict['formats'])
-        ydl.process_ie_result(copy.deepcopy(info_dict))
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'ogg-64')
-
-    def test_format_selection_video(self):
-        formats = [
-            {'format_id': 'dash-video-low', 'ext': 'mp4', 'preference': 1, 'acodec': 'none', 'url': TEST_URL},
-            {'format_id': 'dash-video-high', 'ext': 'mp4', 'preference': 2, 'acodec': 'none', 'url': TEST_URL},
-            {'format_id': 'vid', 'ext': 'mp4', 'preference': 3, 'url': TEST_URL},
-        ]
-        info_dict = _make_result(formats)
-
-        ydl = YDL({'format': 'bestvideo'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'dash-video-high')
-
-        ydl = YDL({'format': 'worstvideo'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'dash-video-low')
-
-        ydl = YDL({'format': 'bestvideo[format_id^=dash][format_id$=low]'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'dash-video-low')
-
-        formats = [
-            {'format_id': 'vid-vcodec-dot', 'ext': 'mp4', 'preference': 1, 'vcodec': 'avc1.123456', 'acodec': 'none', 'url': TEST_URL},
-        ]
-        info_dict = _make_result(formats)
-
-        ydl = YDL({'format': 'bestvideo[vcodec=avc1.123456]'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'vid-vcodec-dot')
-
-    def test_format_selection_string_ops(self):
-        formats = [
-            {'format_id': 'abc-cba', 'ext': 'mp4', 'url': TEST_URL},
-            {'format_id': 'zxc-cxz', 'ext': 'webm', 'url': TEST_URL},
-        ]
-        info_dict = _make_result(formats)
-
-        # equals (=)
-        ydl = YDL({'format': '[format_id=abc-cba]'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'abc-cba')
-
-        # does not equal (!=)
-        ydl = YDL({'format': '[format_id!=abc-cba]'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'zxc-cxz')
-
-        ydl = YDL({'format': '[format_id!=abc-cba][format_id!=zxc-cxz]'})
-        self.assertRaises(ExtractorError, ydl.process_ie_result, info_dict.copy())
-
-        # starts with (^=)
-        ydl = YDL({'format': '[format_id^=abc]'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'abc-cba')
-
-        # does not start with (!^=)
-        ydl = YDL({'format': '[format_id!^=abc]'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'zxc-cxz')
-
-        ydl = YDL({'format': '[format_id!^=abc][format_id!^=zxc]'})
-        self.assertRaises(ExtractorError, ydl.process_ie_result, info_dict.copy())
-
-        # ends with ($=)
-        ydl = YDL({'format': '[format_id$=cba]'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'abc-cba')
-
-        # does not end with (!$=)
-        ydl = YDL({'format': '[format_id!$=cba]'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'zxc-cxz')
-
-        ydl = YDL({'format': '[format_id!$=cba][format_id!$=cxz]'})
-        self.assertRaises(ExtractorError, ydl.process_ie_result, info_dict.copy())
-
-        # contains (*=)
-        ydl = YDL({'format': '[format_id*=bc-cb]'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'abc-cba')
-
-        # does not contain (!*=)
-        ydl = YDL({'format': '[format_id!*=bc-cb]'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'zxc-cxz')
-
-        ydl = YDL({'format': '[format_id!*=abc][format_id!*=zxc]'})
-        self.assertRaises(ExtractorError, ydl.process_ie_result, info_dict.copy())
-
-        ydl = YDL({'format': '[format_id!*=-]'})
-        self.assertRaises(ExtractorError, ydl.process_ie_result, info_dict.copy())
-
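The string operators walked through in test_format_selection_string_ops can be restated compactly: = (equals), ^= (starts with), $= (ends with), *= (contains), each negatable with a leading '!'. The helper below is a plain-Python illustration of that table, not the real format-spec parser:

```python
# Plain restatement of the [format_id<op>value] string operators.
def matches(format_id, op, value, negate=False):
    ops = {
        '=': lambda s: s == value,
        '^=': lambda s: s.startswith(value),
        '$=': lambda s: s.endswith(value),
        '*=': lambda s: value in s,
    }
    result = ops[op](format_id)
    return not result if negate else result

print(matches('abc-cba', '^=', 'abc'))               # True
print(matches('zxc-cxz', '$=', 'cba', negate=True))  # True
print(matches('abc-cba', '*=', 'bc-cb'))             # True
```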
-    def test_youtube_format_selection(self):
-        order = [
-            '38', '37', '46', '22', '45', '35', '44', '18', '34', '43', '6', '5', '17', '36', '13',
-            # Apple HTTP Live Streaming
-            '96', '95', '94', '93', '92', '132', '151',
-            # 3D
-            '85', '84', '102', '83', '101', '82', '100',
-            # Dash video
-            '137', '248', '136', '247', '135', '246',
-            '245', '244', '134', '243', '133', '242', '160',
-            # Dash audio
-            '141', '172', '140', '171', '139',
-        ]
-
-        def format_info(f_id):
-            info = YoutubeIE._formats[f_id].copy()
-
-            # XXX: In real cases InfoExtractor._parse_mpd_formats() fills up 'acodec'
-            # and 'vcodec', while in tests such information is incomplete since
-            # commit a6c2c24479e5f4827ceb06f64d855329c0a6f593
-            # test_YoutubeDL.test_youtube_format_selection is broken without
-            # this fix
-            if 'acodec' in info and 'vcodec' not in info:
-                info['vcodec'] = 'none'
-            elif 'vcodec' in info and 'acodec' not in info:
-                info['acodec'] = 'none'
-
-            info['format_id'] = f_id
-            info['url'] = 'url:' + f_id
-            return info
-        formats_order = [format_info(f_id) for f_id in order]
-
-        info_dict = _make_result(list(formats_order), extractor='youtube')
-        ydl = YDL({'format': 'bestvideo+bestaudio'})
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
         ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], '137+141')
-        self.assertEqual(downloaded['ext'], 'mp4')
+        self.assertEqual(downloaded['format_id'], u'2')
 
-        info_dict = _make_result(list(formats_order), extractor='youtube')
-        ydl = YDL({'format': 'bestvideo[height>=999999]+bestaudio/best'})
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        ydl = YDL({'format': u'webm/mp4'})
         ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], '38')
+        self.assertEqual(downloaded['format_id'], u'47')
 
-        info_dict = _make_result(list(formats_order), extractor='youtube')
-        ydl = YDL({'format': 'bestvideo/best,bestaudio'})
+        ydl = YDL({'format': u'3gp/40/mp4'})
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
-        ydl.process_ie_result(info_dict)
-        downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
-        self.assertEqual(downloaded_ids, ['137', '141'])
-
-        info_dict = _make_result(list(formats_order), extractor='youtube')
-        ydl = YDL({'format': '(bestvideo[ext=mp4],bestvideo[ext=webm])+bestaudio'})
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
-        ydl.process_ie_result(info_dict)
-        downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
-        self.assertEqual(downloaded_ids, ['137+141', '248+141'])
-
-        info_dict = _make_result(list(formats_order), extractor='youtube')
-        ydl = YDL({'format': '(bestvideo[ext=mp4],bestvideo[ext=webm])[height<=720]+bestaudio'})
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
-        ydl.process_ie_result(info_dict)
-        downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
-        self.assertEqual(downloaded_ids, ['136+141', '247+141'])
-
-        info_dict = _make_result(list(formats_order), extractor='youtube')
-        ydl = YDL({'format': '(bestvideo[ext=none]/bestvideo[ext=webm])+bestaudio'})
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
-        ydl.process_ie_result(info_dict)
-        downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
-        self.assertEqual(downloaded_ids, ['248+141'])
-
-        for f1, f2 in zip(formats_order, formats_order[1:]):
-            info_dict = _make_result([f1, f2], extractor='youtube')
-            ydl = YDL({'format': 'best/bestvideo'})
-            yie = YoutubeIE(ydl)
-            yie._sort_formats(info_dict['formats'])
-            ydl.process_ie_result(info_dict)
-            downloaded = ydl.downloaded_info_dicts[0]
-            self.assertEqual(downloaded['format_id'], f1['format_id'])
-
-            info_dict = _make_result([f2, f1], extractor='youtube')
-            ydl = YDL({'format': 'best/bestvideo'})
-            yie = YoutubeIE(ydl)
-            yie._sort_formats(info_dict['formats'])
-            ydl.process_ie_result(info_dict)
-            downloaded = ydl.downloaded_info_dicts[0]
-            self.assertEqual(downloaded['format_id'], f1['format_id'])
-
-    def test_audio_only_extractor_format_selection(self):
-        # For extractors with incomplete formats (all formats are audio-only or
-        # video-only) best and worst should fallback to corresponding best/worst
-        # video-only or audio-only formats (as per
-        # https://github.com/ytdl-org/youtube-dl/pull/5556)
-        formats = [
-            {'format_id': 'low', 'ext': 'mp3', 'preference': 1, 'vcodec': 'none', 'url': TEST_URL},
-            {'format_id': 'high', 'ext': 'mp3', 'preference': 2, 'vcodec': 'none', 'url': TEST_URL},
-        ]
-        info_dict = _make_result(formats)
-
-        ydl = YDL({'format': 'best'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'high')
-
-        ydl = YDL({'format': 'worst'})
-        ydl.process_ie_result(info_dict.copy())
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'low')
-
-    def test_format_not_available(self):
-        formats = [
-            {'format_id': 'regular', 'ext': 'mp4', 'height': 360, 'url': TEST_URL},
-            {'format_id': 'video', 'ext': 'mp4', 'height': 720, 'acodec': 'none', 'url': TEST_URL},
-        ]
-        info_dict = _make_result(formats)
-
-        # This must fail since complete video-audio format does not match filter
-        # and extractor does not provide incomplete only formats (i.e. only
-        # video-only or audio-only).
-        ydl = YDL({'format': 'best[height>360]'})
-        self.assertRaises(ExtractorError, ydl.process_ie_result, info_dict.copy())
-
-    def test_format_selection_issue_10083(self):
-        # See https://github.com/ytdl-org/youtube-dl/issues/10083
-        formats = [
-            {'format_id': 'regular', 'height': 360, 'url': TEST_URL},
-            {'format_id': 'video', 'height': 720, 'acodec': 'none', 'url': TEST_URL},
-            {'format_id': 'audio', 'vcodec': 'none', 'url': TEST_URL},
-        ]
-        info_dict = _make_result(formats)
-
-        ydl = YDL({'format': 'best[height>360]/bestvideo[height>360]+bestaudio'})
-        ydl.process_ie_result(info_dict.copy())
-        self.assertEqual(ydl.downloaded_info_dicts[0]['format_id'], 'video+audio')
-
-    def test_invalid_format_specs(self):
-        def assert_syntax_error(format_spec):
-            ydl = YDL({'format': format_spec})
-            info_dict = _make_result([{'format_id': 'foo', 'url': TEST_URL}])
-            self.assertRaises(SyntaxError, ydl.process_ie_result, info_dict)
-
-        assert_syntax_error('bestvideo,,best')
-        assert_syntax_error('+bestaudio')
-        assert_syntax_error('bestvideo+')
-        assert_syntax_error('/')
-
-    def test_format_filtering(self):
-        formats = [
-            {'format_id': 'A', 'filesize': 500, 'width': 1000},
-            {'format_id': 'B', 'filesize': 1000, 'width': 500},
-            {'format_id': 'C', 'filesize': 1000, 'width': 400},
-            {'format_id': 'D', 'filesize': 2000, 'width': 600},
-            {'format_id': 'E', 'filesize': 3000},
-            {'format_id': 'F'},
-            {'format_id': 'G', 'filesize': 1000000},
-        ]
-        for f in formats:
-            f['url'] = 'http://_/'
-            f['ext'] = 'unknown'
-        info_dict = _make_result(formats)
-
-        ydl = YDL({'format': 'best[filesize<3000]'})
         ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'D')
+        self.assertEqual(downloaded['format_id'], u'35')
 
-        ydl = YDL({'format': 'best[filesize<=3000]'})
-        ydl.process_ie_result(info_dict)
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'E')
-
-        ydl = YDL({'format': 'best[filesize <= ? 3000]'})
-        ydl.process_ie_result(info_dict)
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'F')
-
-        ydl = YDL({'format': 'best [filesize = 1000] [width>450]'})
-        ydl.process_ie_result(info_dict)
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'B')
-
-        ydl = YDL({'format': 'best [filesize = 1000] [width!=450]'})
-        ydl.process_ie_result(info_dict)
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'C')
-
-        ydl = YDL({'format': '[filesize>?1]'})
-        ydl.process_ie_result(info_dict)
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'G')
-
-        ydl = YDL({'format': '[filesize<1M]'})
-        ydl.process_ie_result(info_dict)
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'E')
-
-        ydl = YDL({'format': '[filesize<1MiB]'})
-        ydl.process_ie_result(info_dict)
-        downloaded = ydl.downloaded_info_dicts[0]
-        self.assertEqual(downloaded['format_id'], 'G')
-
-        ydl = YDL({'format': 'all[width>=400][width<=600]'})
-        ydl.process_ie_result(info_dict)
-        downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
-        self.assertEqual(downloaded_ids, ['B', 'C', 'D'])
-
-        ydl = YDL({'format': 'best[height<40]'})
-        try:
-            ydl.process_ie_result(info_dict)
-        except ExtractorError:
-            pass
-        self.assertEqual(ydl.downloaded_info_dicts, [])
-
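The reason '[filesize&lt;1M]' and '[filesize&lt;1MiB]' pick different formats in test_format_filtering is that decimal suffixes scale by powers of 1000 while binary ones use 1024: 1M = 1000000 but 1MiB = 1048576, so format G (filesize 1000000) fails the first filter yet passes the second. The parser below is a simplified illustration, not youtube-dl's own size parsing:

```python
# Minimal size-suffix parser: decimal units (K, M) use powers of 1000,
# binary units (KiB, MiB) use powers of 1024.
def parse_size(text):
    units = {'K': 1000, 'M': 1000 ** 2, 'KiB': 1024, 'MiB': 1024 ** 2}
    # check longer suffixes first so '1MiB' is not read as '1Mi' + 'B'
    for suffix in sorted(units, key=len, reverse=True):
        if text.endswith(suffix):
            return int(text[:-len(suffix)]) * units[suffix]
    return int(text)

print(parse_size('1M'))    # 1000000
print(parse_size('1MiB'))  # 1048576
```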
-    def test_default_format_spec(self):
-        ydl = YDL({'simulate': True})
-        self.assertEqual(ydl._default_format_spec({}), 'bestvideo+bestaudio/best')
-
-        ydl = YDL({})
-        self.assertEqual(ydl._default_format_spec({'is_live': True}), 'best/bestvideo+bestaudio')
-
-        ydl = YDL({'simulate': True})
-        self.assertEqual(ydl._default_format_spec({'is_live': True}), 'bestvideo+bestaudio/best')
-
-        ydl = YDL({'outtmpl': '-'})
-        self.assertEqual(ydl._default_format_spec({}), 'best/bestvideo+bestaudio')
-
-        ydl = YDL({})
-        self.assertEqual(ydl._default_format_spec({}, download=False), 'bestvideo+bestaudio/best')
-        self.assertEqual(ydl._default_format_spec({'is_live': True}), 'best/bestvideo+bestaudio')
-
-
-class TestYoutubeDL(unittest.TestCase):
-    def test_subtitles(self):
-        def s_formats(lang, autocaption=False):
-            return [{
-                'ext': ext,
-                'url': 'http://localhost/video.%s.%s' % (lang, ext),
-                '_auto': autocaption,
-            } for ext in ['vtt', 'srt', 'ass']]
-        subtitles = dict((l, s_formats(l)) for l in ['en', 'fr', 'es'])
-        auto_captions = dict((l, s_formats(l, True)) for l in ['it', 'pt', 'es'])
-        info_dict = {
-            'id': 'test',
-            'title': 'Test',
-            'url': 'http://localhost/video.mp4',
-            'subtitles': subtitles,
-            'automatic_captions': auto_captions,
-            'extractor': 'TEST',
-        }
-
-        def get_info(params={}):
-            params.setdefault('simulate', True)
-            ydl = YDL(params)
-            ydl.report_warning = lambda *args, **kargs: None
-            return ydl.process_video_result(info_dict, download=False)
-
-        result = get_info()
-        self.assertFalse(result.get('requested_subtitles'))
-        self.assertEqual(result['subtitles'], subtitles)
-        self.assertEqual(result['automatic_captions'], auto_captions)
-
-        result = get_info({'writesubtitles': True})
-        subs = result['requested_subtitles']
-        self.assertTrue(subs)
-        self.assertEqual(set(subs.keys()), set(['en']))
-        self.assertTrue(subs['en'].get('data') is None)
-        self.assertEqual(subs['en']['ext'], 'ass')
-
-        result = get_info({'writesubtitles': True, 'subtitlesformat': 'foo/srt'})
-        subs = result['requested_subtitles']
-        self.assertEqual(subs['en']['ext'], 'srt')
-
-        result = get_info({'writesubtitles': True, 'subtitleslangs': ['es', 'fr', 'it']})
-        subs = result['requested_subtitles']
-        self.assertTrue(subs)
-        self.assertEqual(set(subs.keys()), set(['es', 'fr']))
-
-        result = get_info({'writesubtitles': True, 'writeautomaticsub': True, 'subtitleslangs': ['es', 'pt']})
-        subs = result['requested_subtitles']
-        self.assertTrue(subs)
-        self.assertEqual(set(subs.keys()), set(['es', 'pt']))
-        self.assertFalse(subs['es']['_auto'])
-        self.assertTrue(subs['pt']['_auto'])
-
-        result = get_info({'writeautomaticsub': True, 'subtitleslangs': ['es', 'pt']})
-        subs = result['requested_subtitles']
-        self.assertTrue(subs)
-        self.assertEqual(set(subs.keys()), set(['es', 'pt']))
-        self.assertTrue(subs['es']['_auto'])
-        self.assertTrue(subs['pt']['_auto'])
-
     def test_add_extra_info(self):
         test_dict = {
@@ -622,303 +140,6 @@ class TestYoutubeDL(unittest.TestCase):
         self.assertEqual(test_dict['extractor'], 'Foo')
         self.assertEqual(test_dict['playlist'], 'funny videos')
 
-    def test_prepare_filename(self):
-        info = {
-            'id': '1234',
-            'ext': 'mp4',
-            'width': None,
-            'height': 1080,
-            'title1': '$PATH',
-            'title2': '%PATH%',
-        }
-
-        def fname(templ):
-            ydl = YoutubeDL({'outtmpl': templ})
-            return ydl.prepare_filename(info)
-        self.assertEqual(fname('%(id)s.%(ext)s'), '1234.mp4')
-        self.assertEqual(fname('%(id)s-%(width)s.%(ext)s'), '1234-NA.mp4')
-        # Replace missing fields with 'NA'
-        self.assertEqual(fname('%(uploader_date)s-%(id)s.%(ext)s'), 'NA-1234.mp4')
-        self.assertEqual(fname('%(height)d.%(ext)s'), '1080.mp4')
-        self.assertEqual(fname('%(height)6d.%(ext)s'), '  1080.mp4')
-        self.assertEqual(fname('%(height)-6d.%(ext)s'), '1080  .mp4')
-        self.assertEqual(fname('%(height)06d.%(ext)s'), '001080.mp4')
-        self.assertEqual(fname('%(height) 06d.%(ext)s'), ' 01080.mp4')
-        self.assertEqual(fname('%(height) 06d.%(ext)s'), ' 01080.mp4')
-        self.assertEqual(fname('%(height)0 6d.%(ext)s'), ' 01080.mp4')
-        self.assertEqual(fname('%(height)0 6d.%(ext)s'), ' 01080.mp4')
-        self.assertEqual(fname('%(height) 0 6d.%(ext)s'), ' 01080.mp4')
-        self.assertEqual(fname('%%'), '%')
-        self.assertEqual(fname('%%%%'), '%%')
-        self.assertEqual(fname('%%(height)06d.%(ext)s'), '%(height)06d.mp4')
-        self.assertEqual(fname('%(width)06d.%(ext)s'), 'NA.mp4')
-        self.assertEqual(fname('%(width)06d.%%(ext)s'), 'NA.%(ext)s')
-        self.assertEqual(fname('%%(width)06d.%(ext)s'), '%(width)06d.mp4')
-        self.assertEqual(fname('Hello %(title1)s'), 'Hello $PATH')
-        self.assertEqual(fname('Hello %(title2)s'), 'Hello %PATH%')
|
|
||||||
|
|
||||||
def test_format_note(self):
|
|
||||||
ydl = YoutubeDL()
|
|
||||||
self.assertEqual(ydl._format_note({}), '')
|
|
||||||
assertRegexpMatches(self, ydl._format_note({
|
|
||||||
'vbr': 10,
|
|
||||||
}), r'^\s*10k$')
|
|
||||||
assertRegexpMatches(self, ydl._format_note({
|
|
||||||
'fps': 30,
|
|
||||||
}), r'^30fps$')
|
|
||||||
|
|
||||||
def test_postprocessors(self):
|
|
||||||
filename = 'post-processor-testfile.mp4'
|
|
||||||
audiofile = filename + '.mp3'
|
|
||||||
|
|
||||||
class SimplePP(PostProcessor):
|
|
||||||
def run(self, info):
|
|
||||||
with open(audiofile, 'wt') as f:
|
|
||||||
f.write('EXAMPLE')
|
|
||||||
return [info['filepath']], info
|
|
||||||
|
|
||||||
def run_pp(params, PP):
|
|
||||||
with open(filename, 'wt') as f:
|
|
||||||
f.write('EXAMPLE')
|
|
||||||
ydl = YoutubeDL(params)
|
|
||||||
ydl.add_post_processor(PP())
|
|
||||||
ydl.post_process(filename, {'filepath': filename})
|
|
||||||
|
|
||||||
run_pp({'keepvideo': True}, SimplePP)
|
|
||||||
self.assertTrue(os.path.exists(filename), '%s doesn\'t exist' % filename)
|
|
||||||
self.assertTrue(os.path.exists(audiofile), '%s doesn\'t exist' % audiofile)
|
|
||||||
os.unlink(filename)
|
|
||||||
os.unlink(audiofile)
|
|
||||||
|
|
||||||
run_pp({'keepvideo': False}, SimplePP)
|
|
||||||
self.assertFalse(os.path.exists(filename), '%s exists' % filename)
|
|
||||||
self.assertTrue(os.path.exists(audiofile), '%s doesn\'t exist' % audiofile)
|
|
||||||
os.unlink(audiofile)
|
|
||||||
|
|
||||||
class ModifierPP(PostProcessor):
|
|
||||||
def run(self, info):
|
|
||||||
with open(info['filepath'], 'wt') as f:
|
|
||||||
f.write('MODIFIED')
|
|
||||||
return [], info
|
|
||||||
|
|
||||||
run_pp({'keepvideo': False}, ModifierPP)
|
|
||||||
self.assertTrue(os.path.exists(filename), '%s doesn\'t exist' % filename)
|
|
||||||
os.unlink(filename)
|
|
||||||
|
|
||||||
def test_match_filter(self):
|
|
||||||
class FilterYDL(YDL):
|
|
||||||
def __init__(self, *args, **kwargs):
|
|
||||||
super(FilterYDL, self).__init__(*args, **kwargs)
|
|
||||||
self.params['simulate'] = True
|
|
||||||
|
|
||||||
def process_info(self, info_dict):
|
|
||||||
super(YDL, self).process_info(info_dict)
|
|
||||||
|
|
||||||
def _match_entry(self, info_dict, incomplete):
|
|
||||||
res = super(FilterYDL, self)._match_entry(info_dict, incomplete)
|
|
||||||
if res is None:
|
|
||||||
self.downloaded_info_dicts.append(info_dict)
|
|
||||||
return res
|
|
||||||
|
|
||||||
first = {
|
|
||||||
'id': '1',
|
|
||||||
'url': TEST_URL,
|
|
||||||
'title': 'one',
|
|
||||||
'extractor': 'TEST',
|
|
||||||
'duration': 30,
|
|
||||||
'filesize': 10 * 1024,
|
|
||||||
'playlist_id': '42',
|
|
||||||
'uploader': "變態妍字幕版 太妍 тест",
|
|
||||||
'creator': "тест ' 123 ' тест--",
|
|
||||||
}
|
|
||||||
second = {
|
|
||||||
'id': '2',
|
|
||||||
'url': TEST_URL,
|
|
||||||
'title': 'two',
|
|
||||||
'extractor': 'TEST',
|
|
||||||
'duration': 10,
|
|
||||||
'description': 'foo',
|
|
||||||
'filesize': 5 * 1024,
|
|
||||||
'playlist_id': '43',
|
|
||||||
'uploader': "тест 123",
|
|
||||||
}
|
|
||||||
videos = [first, second]
|
|
||||||
|
|
||||||
def get_videos(filter_=None):
|
|
||||||
ydl = FilterYDL({'match_filter': filter_})
|
|
||||||
for v in videos:
|
|
||||||
ydl.process_ie_result(v, download=True)
|
|
||||||
return [v['id'] for v in ydl.downloaded_info_dicts]
|
|
||||||
|
|
||||||
res = get_videos()
|
|
||||||
self.assertEqual(res, ['1', '2'])
|
|
||||||
|
|
||||||
def f(v):
|
|
||||||
if v['id'] == '1':
|
|
||||||
return None
|
|
||||||
else:
|
|
||||||
return 'Video id is not 1'
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1'])
|
|
||||||
|
|
||||||
f = match_filter_func('duration < 30')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['2'])
|
|
||||||
|
|
||||||
f = match_filter_func('description = foo')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['2'])
|
|
||||||
|
|
||||||
f = match_filter_func('description =? foo')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1', '2'])
|
|
||||||
|
|
||||||
f = match_filter_func('filesize > 5KiB')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1'])
|
|
||||||
|
|
||||||
f = match_filter_func('playlist_id = 42')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1'])
|
|
||||||
|
|
||||||
f = match_filter_func('uploader = "變態妍字幕版 太妍 тест"')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1'])
|
|
||||||
|
|
||||||
f = match_filter_func('uploader != "變態妍字幕版 太妍 тест"')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['2'])
|
|
||||||
|
|
||||||
f = match_filter_func('creator = "тест \' 123 \' тест--"')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1'])
|
|
||||||
|
|
||||||
f = match_filter_func("creator = 'тест \\' 123 \\' тест--'")
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1'])
|
|
||||||
|
|
||||||
f = match_filter_func(r"creator = 'тест \' 123 \' тест--' & duration > 30")
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, [])
|
|
||||||
|
|
||||||
def test_playlist_items_selection(self):
|
|
||||||
entries = [{
|
|
||||||
'id': compat_str(i),
|
|
||||||
'title': compat_str(i),
|
|
||||||
'url': TEST_URL,
|
|
||||||
} for i in range(1, 5)]
|
|
||||||
playlist = {
|
|
||||||
'_type': 'playlist',
|
|
||||||
'id': 'test',
|
|
||||||
'entries': entries,
|
|
||||||
'extractor': 'test:playlist',
|
|
||||||
'extractor_key': 'test:playlist',
|
|
||||||
'webpage_url': 'http://example.com',
|
|
||||||
}
|
|
||||||
|
|
||||||
def get_downloaded_info_dicts(params):
|
|
||||||
ydl = YDL(params)
|
|
||||||
# make a deep copy because the dictionary and nested entries
|
|
||||||
# can be modified
|
|
||||||
ydl.process_ie_result(copy.deepcopy(playlist))
|
|
||||||
return ydl.downloaded_info_dicts
|
|
||||||
|
|
||||||
def get_ids(params):
|
|
||||||
return [int(v['id']) for v in get_downloaded_info_dicts(params)]
|
|
||||||
|
|
||||||
result = get_ids({})
|
|
||||||
self.assertEqual(result, [1, 2, 3, 4])
|
|
||||||
|
|
||||||
result = get_ids({'playlistend': 10})
|
|
||||||
self.assertEqual(result, [1, 2, 3, 4])
|
|
||||||
|
|
||||||
result = get_ids({'playlistend': 2})
|
|
||||||
self.assertEqual(result, [1, 2])
|
|
||||||
|
|
||||||
result = get_ids({'playliststart': 10})
|
|
||||||
self.assertEqual(result, [])
|
|
||||||
|
|
||||||
result = get_ids({'playliststart': 2})
|
|
||||||
self.assertEqual(result, [2, 3, 4])
|
|
||||||
|
|
||||||
result = get_ids({'playlist_items': '2-4'})
|
|
||||||
self.assertEqual(result, [2, 3, 4])
|
|
||||||
|
|
||||||
result = get_ids({'playlist_items': '2,4'})
|
|
||||||
self.assertEqual(result, [2, 4])
|
|
||||||
|
|
||||||
result = get_ids({'playlist_items': '10'})
|
|
||||||
self.assertEqual(result, [])
|
|
||||||
|
|
||||||
result = get_ids({'playlist_items': '3-10'})
|
|
||||||
self.assertEqual(result, [3, 4])
|
|
||||||
|
|
||||||
result = get_ids({'playlist_items': '2-4,3-4,3'})
|
|
||||||
self.assertEqual(result, [2, 3, 4])
|
|
||||||
|
|
||||||
# Tests for https://github.com/ytdl-org/youtube-dl/issues/10591
|
|
||||||
# @{
|
|
||||||
result = get_downloaded_info_dicts({'playlist_items': '2-4,3-4,3'})
|
|
||||||
self.assertEqual(result[0]['playlist_index'], 2)
|
|
||||||
self.assertEqual(result[1]['playlist_index'], 3)
|
|
||||||
|
|
||||||
result = get_downloaded_info_dicts({'playlist_items': '2-4,3-4,3'})
|
|
||||||
self.assertEqual(result[0]['playlist_index'], 2)
|
|
||||||
self.assertEqual(result[1]['playlist_index'], 3)
|
|
||||||
self.assertEqual(result[2]['playlist_index'], 4)
|
|
||||||
|
|
||||||
result = get_downloaded_info_dicts({'playlist_items': '4,2'})
|
|
||||||
self.assertEqual(result[0]['playlist_index'], 4)
|
|
||||||
self.assertEqual(result[1]['playlist_index'], 2)
|
|
||||||
# @}
|
|
||||||
|
|
||||||
def test_urlopen_no_file_protocol(self):
|
|
||||||
# see https://github.com/ytdl-org/youtube-dl/issues/8227
|
|
||||||
ydl = YDL()
|
|
||||||
self.assertRaises(compat_urllib_error.URLError, ydl.urlopen, 'file:///etc/passwd')
|
|
||||||
|
|
||||||
def test_do_not_override_ie_key_in_url_transparent(self):
|
|
||||||
ydl = YDL()
|
|
||||||
|
|
||||||
class Foo1IE(InfoExtractor):
|
|
||||||
_VALID_URL = r'foo1:'
|
|
||||||
|
|
||||||
def _real_extract(self, url):
|
|
||||||
return {
|
|
||||||
'_type': 'url_transparent',
|
|
||||||
'url': 'foo2:',
|
|
||||||
'ie_key': 'Foo2',
|
|
||||||
'title': 'foo1 title',
|
|
||||||
'id': 'foo1_id',
|
|
||||||
}
|
|
||||||
|
|
||||||
class Foo2IE(InfoExtractor):
|
|
||||||
_VALID_URL = r'foo2:'
|
|
||||||
|
|
||||||
def _real_extract(self, url):
|
|
||||||
return {
|
|
||||||
'_type': 'url',
|
|
||||||
'url': 'foo3:',
|
|
||||||
'ie_key': 'Foo3',
|
|
||||||
}
|
|
||||||
|
|
||||||
class Foo3IE(InfoExtractor):
|
|
||||||
_VALID_URL = r'foo3:'
|
|
||||||
|
|
||||||
def _real_extract(self, url):
|
|
||||||
return _make_result([{'url': TEST_URL}], title='foo3 title')
|
|
||||||
|
|
||||||
ydl.add_info_extractor(Foo1IE(ydl))
|
|
||||||
ydl.add_info_extractor(Foo2IE(ydl))
|
|
||||||
ydl.add_info_extractor(Foo3IE(ydl))
|
|
||||||
ydl.extract_info('foo1:')
|
|
||||||
downloaded = ydl.downloaded_info_dicts[0]
|
|
||||||
self.assertEqual(downloaded['url'], TEST_URL)
|
|
||||||
self.assertEqual(downloaded['title'], 'foo1 title')
|
|
||||||
self.assertEqual(downloaded['id'], 'testid')
|
|
||||||
self.assertEqual(downloaded['extractor'], 'testex')
|
|
||||||
self.assertEqual(downloaded['extractor_key'], 'TestEx')
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
if __name__ == '__main__':
|
||||||
unittest.main()
|
unittest.main()
|
||||||
|
@@ -1,51 +0,0 @@
#!/usr/bin/env python
# coding: utf-8

from __future__ import unicode_literals

import os
import re
import sys
import tempfile
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.utils import YoutubeDLCookieJar


class TestYoutubeDLCookieJar(unittest.TestCase):
    def test_keep_session_cookies(self):
        cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/session_cookies.txt')
        cookiejar.load(ignore_discard=True, ignore_expires=True)
        tf = tempfile.NamedTemporaryFile(delete=False)
        try:
            cookiejar.save(filename=tf.name, ignore_discard=True, ignore_expires=True)
            temp = tf.read().decode('utf-8')
            self.assertTrue(re.search(
                r'www\.foobar\.foobar\s+FALSE\s+/\s+TRUE\s+0\s+YoutubeDLExpiresEmpty\s+YoutubeDLExpiresEmptyValue', temp))
            self.assertTrue(re.search(
                r'www\.foobar\.foobar\s+FALSE\s+/\s+TRUE\s+0\s+YoutubeDLExpires0\s+YoutubeDLExpires0Value', temp))
        finally:
            tf.close()
            os.remove(tf.name)

    def test_strip_httponly_prefix(self):
        cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/httponly_cookies.txt')
        cookiejar.load(ignore_discard=True, ignore_expires=True)

        def assert_cookie_has_value(key):
            self.assertEqual(cookiejar._cookies['www.foobar.foobar']['/'][key].value, key + '_VALUE')

        assert_cookie_has_value('HTTPONLY_COOKIE')
        assert_cookie_has_value('JS_ACCESSIBLE_COOKIE')

    def test_malformed_cookies(self):
        cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/malformed_cookies.txt')
        cookiejar.load(ignore_discard=True, ignore_expires=True)
        # Cookies should be empty since all malformed cookie file entries
        # will be ignored
        self.assertFalse(cookiejar._cookies)


if __name__ == '__main__':
    unittest.main()
@@ -1,63 +0,0 @@
#!/usr/bin/env python

from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.aes import aes_decrypt, aes_encrypt, aes_cbc_decrypt, aes_cbc_encrypt, aes_decrypt_text
from youtube_dl.utils import bytes_to_intlist, intlist_to_bytes
import base64

# the encrypted data can be generated with 'devscripts/generate_aes_testdata.py'


class TestAES(unittest.TestCase):
    def setUp(self):
        self.key = self.iv = [0x20, 0x15] + 14 * [0]
        self.secret_msg = b'Secret message goes here'

    def test_encrypt(self):
        msg = b'message'
        key = list(range(16))
        encrypted = aes_encrypt(bytes_to_intlist(msg), key)
        decrypted = intlist_to_bytes(aes_decrypt(encrypted, key))
        self.assertEqual(decrypted, msg)

    def test_cbc_decrypt(self):
        data = bytes_to_intlist(
            b"\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6'\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd"
        )
        decrypted = intlist_to_bytes(aes_cbc_decrypt(data, self.key, self.iv))
        self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)

    def test_cbc_encrypt(self):
        data = bytes_to_intlist(self.secret_msg)
        encrypted = intlist_to_bytes(aes_cbc_encrypt(data, self.key, self.iv))
        self.assertEqual(
            encrypted,
            b"\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6'\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd")

    def test_decrypt_text(self):
        password = intlist_to_bytes(self.key).decode('utf-8')
        encrypted = base64.b64encode(
            intlist_to_bytes(self.iv[:8])
            + b'\x17\x15\x93\xab\x8d\x80V\xcdV\xe0\t\xcdo\xc2\xa5\xd8ksM\r\xe27N\xae'
        ).decode('utf-8')
        decrypted = (aes_decrypt_text(encrypted, password, 16))
        self.assertEqual(decrypted, self.secret_msg)

        password = intlist_to_bytes(self.key).decode('utf-8')
        encrypted = base64.b64encode(
            intlist_to_bytes(self.iv[:8])
            + b'\x0b\xe6\xa4\xd9z\x0e\xb8\xb9\xd0\xd4i_\x85\x1d\x99\x98_\xe5\x80\xe7.\xbf\xa5\x83'
        ).decode('utf-8')
        decrypted = (aes_decrypt_text(encrypted, password, 32))
        self.assertEqual(decrypted, self.secret_msg)


if __name__ == '__main__':
    unittest.main()
@@ -1,5 +1,4 @@
 #!/usr/bin/env python
-from __future__ import unicode_literals
 
 # Allow direct execution
 import os
@@ -14,13 +13,13 @@ from youtube_dl import YoutubeDL
 
 
 def _download_restricted(url, filename, age):
-    """ Returns true if the file has been downloaded """
+    """ Returns true iff the file has been downloaded """
 
     params = {
         'age_limit': age,
         'skip_download': True,
         'writeinfojson': True,
-        'outtmpl': '%(id)s.%(ext)s',
+        "outtmpl": "%(id)s.%(ext)s",
     }
     ydl = YoutubeDL(params)
     ydl.add_default_info_extractors()
@@ -45,6 +44,11 @@ class TestAgeRestriction(unittest.TestCase):
             'http://www.youporn.com/watch/505835/sex-ed-is-it-safe-to-masturbate-daily/',
             '505835.mp4', 2, old_age=25)
 
+    def test_pornotube(self):
+        self._assert_restricted(
+            'http://pornotube.com/c/173/m/1689755/Marilyn-Monroe-Bathing',
+            '1689755.flv', 13)
+
 
 if __name__ == '__main__':
     unittest.main()
@@ -1,20 +1,17 @@
 #!/usr/bin/env python
 
-from __future__ import unicode_literals
-
 # Allow direct execution
 import os
 import sys
 import unittest
-import collections
 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 
 
-from test.helper import gettestcases
+from test.helper import get_testcases
 
 from youtube_dl.extractor import (
-    FacebookIE,
     gen_extractors,
+    JustinTVIE,
     YoutubeIE,
 )
 
@@ -31,24 +28,21 @@ class TestAllURLsMatching(unittest.TestCase):
 
     def test_youtube_playlist_matching(self):
         assertPlaylist = lambda url: self.assertMatch(url, ['youtube:playlist'])
-        assertPlaylist('ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
-        assertPlaylist('UUBABnxM4Ar9ten8Mdjj1j0Q')  # 585
-        assertPlaylist('PL63F0C78739B09958')
-        assertPlaylist('https://www.youtube.com/playlist?list=UUBABnxM4Ar9ten8Mdjj1j0Q')
-        assertPlaylist('https://www.youtube.com/course?list=ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
-        assertPlaylist('https://www.youtube.com/playlist?list=PLwP_SiAcdui0KVebT0mU9Apz359a4ubsC')
-        assertPlaylist('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012')  # 668
-        self.assertFalse('youtube:playlist' in self.matching_ies('PLtS2H6bU1M'))
-        # Top tracks
-        assertPlaylist('https://www.youtube.com/playlist?list=MCUS.20142101')
+        assertPlaylist(u'ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
+        assertPlaylist(u'UUBABnxM4Ar9ten8Mdjj1j0Q') #585
+        assertPlaylist(u'PL63F0C78739B09958')
+        assertPlaylist(u'https://www.youtube.com/playlist?list=UUBABnxM4Ar9ten8Mdjj1j0Q')
+        assertPlaylist(u'https://www.youtube.com/course?list=ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
+        assertPlaylist(u'https://www.youtube.com/playlist?list=PLwP_SiAcdui0KVebT0mU9Apz359a4ubsC')
+        assertPlaylist(u'https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012') #668
+        self.assertFalse('youtube:playlist' in self.matching_ies(u'PLtS2H6bU1M'))
 
     def test_youtube_matching(self):
-        self.assertTrue(YoutubeIE.suitable('PLtS2H6bU1M'))
-        self.assertFalse(YoutubeIE.suitable('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012'))  # 668
+        self.assertTrue(YoutubeIE.suitable(u'PLtS2H6bU1M'))
+        self.assertFalse(YoutubeIE.suitable(u'https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012')) #668
         self.assertMatch('http://youtu.be/BaW_jenozKc', ['youtube'])
         self.assertMatch('http://www.youtube.com/v/BaW_jenozKc', ['youtube'])
         self.assertMatch('https://youtube.googleapis.com/v/BaW_jenozKc', ['youtube'])
-        self.assertMatch('http://www.cleanvideosearch.com/media/action/yt/watch?videoId=8v_4O44sfjM', ['youtube'])
 
     def test_youtube_channel_matching(self):
         assertChannel = lambda url: self.assertMatch(url, ['youtube:channel'])
@@ -57,10 +51,10 @@ class TestAllURLsMatching(unittest.TestCase):
         assertChannel('https://www.youtube.com/channel/HCtnHdj3df7iM/videos')
 
     def test_youtube_user_matching(self):
-        self.assertMatch('http://www.youtube.com/NASAgovVideo/videos', ['youtube:user'])
+        self.assertMatch('www.youtube.com/NASAgovVideo/videos', ['youtube:user'])
 
     def test_youtube_feeds(self):
-        self.assertMatch('https://www.youtube.com/feed/watch_later', ['youtube:watchlater'])
+        self.assertMatch('https://www.youtube.com/feed/watch_later', ['youtube:watch_later'])
         self.assertMatch('https://www.youtube.com/feed/subscriptions', ['youtube:subscriptions'])
         self.assertMatch('https://www.youtube.com/feed/recommended', ['youtube:recommended'])
         self.assertMatch('https://www.youtube.com/my_favorites', ['youtube:favorites'])
@@ -68,12 +62,24 @@ class TestAllURLsMatching(unittest.TestCase):
     def test_youtube_show_matching(self):
         self.assertMatch('http://www.youtube.com/show/airdisasters', ['youtube:show'])
 
-    def test_youtube_search_matching(self):
-        self.assertMatch('http://www.youtube.com/results?search_query=making+mustard', ['youtube:search_url'])
-        self.assertMatch('https://www.youtube.com/results?baz=bar&search_query=youtube-dl+test+video&filters=video&lclk=video', ['youtube:search_url'])
+    def test_justin_tv_channelid_matching(self):
+        self.assertTrue(JustinTVIE.suitable(u"justin.tv/vanillatv"))
+        self.assertTrue(JustinTVIE.suitable(u"twitch.tv/vanillatv"))
+        self.assertTrue(JustinTVIE.suitable(u"www.justin.tv/vanillatv"))
+        self.assertTrue(JustinTVIE.suitable(u"www.twitch.tv/vanillatv"))
+        self.assertTrue(JustinTVIE.suitable(u"http://www.justin.tv/vanillatv"))
+        self.assertTrue(JustinTVIE.suitable(u"http://www.twitch.tv/vanillatv"))
+        self.assertTrue(JustinTVIE.suitable(u"http://www.justin.tv/vanillatv/"))
+        self.assertTrue(JustinTVIE.suitable(u"http://www.twitch.tv/vanillatv/"))
+
+    def test_justintv_videoid_matching(self):
+        self.assertTrue(JustinTVIE.suitable(u"http://www.twitch.tv/vanillatv/b/328087483"))
+
+    def test_justin_tv_chapterid_matching(self):
+        self.assertTrue(JustinTVIE.suitable(u"http://www.twitch.tv/tsm_theoddone/c/2349361"))
 
     def test_youtube_extract(self):
-        assertExtractId = lambda url, id: self.assertEqual(YoutubeIE.extract_id(url), id)
+        assertExtractId = lambda url, id: self.assertEqual(YoutubeIE()._extract_id(url), id)
         assertExtractId('http://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
         assertExtractId('https://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
         assertExtractId('https://www.youtube.com/watch?feature=player_embedded&v=BaW_jenozKc', 'BaW_jenozKc')
@@ -81,56 +87,24 @@ class TestAllURLsMatching(unittest.TestCase):
         assertExtractId('http://www.youtube.com/watch?v=BaW_jenozKcsharePLED17F32AD9753930', 'BaW_jenozKc')
         assertExtractId('BaW_jenozKc', 'BaW_jenozKc')
 
-    def test_facebook_matching(self):
-        self.assertTrue(FacebookIE.suitable('https://www.facebook.com/Shiniknoh#!/photo.php?v=10153317450565268'))
-        self.assertTrue(FacebookIE.suitable('https://www.facebook.com/cindyweather?fref=ts#!/photo.php?v=10152183998945793'))
-
     def test_no_duplicates(self):
         ies = gen_extractors()
-        for tc in gettestcases(include_onlymatching=True):
+        for tc in get_testcases():
             url = tc['url']
             for ie in ies:
-                if type(ie).__name__ in ('GenericIE', tc['name'] + 'IE'):
+                if type(ie).__name__ in ['GenericIE', tc['name'] + 'IE']:
                     self.assertTrue(ie.suitable(url), '%s should match URL %r' % (type(ie).__name__, url))
                 else:
-                    self.assertFalse(
-                        ie.suitable(url),
-                        '%s should not match URL %r . That URL belongs to %s.' % (type(ie).__name__, url, tc['name']))
+                    self.assertFalse(ie.suitable(url), '%s should not match URL %r' % (type(ie).__name__, url))
 
     def test_keywords(self):
         self.assertMatch(':ytsubs', ['youtube:subscriptions'])
         self.assertMatch(':ytsubscriptions', ['youtube:subscriptions'])
         self.assertMatch(':ythistory', ['youtube:history'])
+        self.assertMatch(':thedailyshow', ['ComedyCentralShows'])
+        self.assertMatch(':tds', ['ComedyCentralShows'])
+        self.assertMatch(':colbertreport', ['ComedyCentralShows'])
+        self.assertMatch(':cr', ['ComedyCentralShows'])
-
-    def test_vimeo_matching(self):
-        self.assertMatch('https://vimeo.com/channels/tributes', ['vimeo:channel'])
-        self.assertMatch('https://vimeo.com/channels/31259', ['vimeo:channel'])
-        self.assertMatch('https://vimeo.com/channels/31259/53576664', ['vimeo'])
-        self.assertMatch('https://vimeo.com/user7108434', ['vimeo:user'])
-        self.assertMatch('https://vimeo.com/user7108434/videos', ['vimeo:user'])
-        self.assertMatch('https://vimeo.com/user21297594/review/75524534/3c257a1b5d', ['vimeo:review'])
-
-    # https://github.com/ytdl-org/youtube-dl/issues/1930
-    def test_soundcloud_not_matching_sets(self):
-        self.assertMatch('http://soundcloud.com/floex/sets/gone-ep', ['soundcloud:set'])
-
-    def test_tumblr(self):
-        self.assertMatch('http://tatianamaslanydaily.tumblr.com/post/54196191430/orphan-black-dvd-extra-behind-the-scenes', ['Tumblr'])
-        self.assertMatch('http://tatianamaslanydaily.tumblr.com/post/54196191430', ['Tumblr'])
-
-    def test_pbs(self):
-        # https://github.com/ytdl-org/youtube-dl/issues/2350
-        self.assertMatch('http://video.pbs.org/viralplayer/2365173446/', ['pbs'])
-        self.assertMatch('http://video.pbs.org/widget/partnerplayer/980042464/', ['pbs'])
-
-    def test_no_duplicated_ie_names(self):
-        name_accu = collections.defaultdict(list)
-        for ie in self.ies:
-            name_accu[ie.IE_NAME.lower()].append(type(ie).__name__)
-        for (ie_name, ie_list) in name_accu.items():
-            self.assertEqual(
-                len(ie_list), 1,
-                'Multiple extractors with the same IE_NAME "%s" (%s)' % (ie_name, ', '.join(ie_list)))
 
 
 if __name__ == '__main__':
@@ -1,59 +0,0 @@
#!/usr/bin/env python
# coding: utf-8

from __future__ import unicode_literals

import shutil

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))


from test.helper import FakeYDL
from youtube_dl.cache import Cache


def _is_empty(d):
    return not bool(os.listdir(d))


def _mkdir(d):
    if not os.path.exists(d):
        os.mkdir(d)


class TestCache(unittest.TestCase):
    def setUp(self):
        TEST_DIR = os.path.dirname(os.path.abspath(__file__))
        TESTDATA_DIR = os.path.join(TEST_DIR, 'testdata')
        _mkdir(TESTDATA_DIR)
        self.test_dir = os.path.join(TESTDATA_DIR, 'cache_test')
        self.tearDown()

    def tearDown(self):
        if os.path.exists(self.test_dir):
            shutil.rmtree(self.test_dir)

    def test_cache(self):
        ydl = FakeYDL({
            'cachedir': self.test_dir,
        })
        c = Cache(ydl)
        obj = {'x': 1, 'y': ['ä', '\\a', True]}
        self.assertEqual(c.load('test_cache', 'k.'), None)
        c.store('test_cache', 'k.', obj)
        self.assertEqual(c.load('test_cache', 'k2'), None)
        self.assertFalse(_is_empty(self.test_dir))
        self.assertEqual(c.load('test_cache', 'k.'), obj)
        self.assertEqual(c.load('test_cache', 'y'), None)
        self.assertEqual(c.load('test_cache2', 'k.'), None)
        c.remove()
        self.assertFalse(os.path.exists(self.test_dir))
        self.assertEqual(c.load('test_cache', 'k.'), None)


if __name__ == '__main__':
    unittest.main()
@@ -1,126 +0,0 @@
-#!/usr/bin/env python
-# coding: utf-8
-
-from __future__ import unicode_literals
-
-# Allow direct execution
-import os
-import sys
-import unittest
-sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
-
-
-from youtube_dl.compat import (
-    compat_getenv,
-    compat_setenv,
-    compat_etree_Element,
-    compat_etree_fromstring,
-    compat_expanduser,
-    compat_shlex_split,
-    compat_str,
-    compat_struct_unpack,
-    compat_urllib_parse_unquote,
-    compat_urllib_parse_unquote_plus,
-    compat_urllib_parse_urlencode,
-)
-
-
-class TestCompat(unittest.TestCase):
-    def test_compat_getenv(self):
-        test_str = 'тест'
-        compat_setenv('YOUTUBE_DL_COMPAT_GETENV', test_str)
-        self.assertEqual(compat_getenv('YOUTUBE_DL_COMPAT_GETENV'), test_str)
-
-    def test_compat_setenv(self):
-        test_var = 'YOUTUBE_DL_COMPAT_SETENV'
-        test_str = 'тест'
-        compat_setenv(test_var, test_str)
-        compat_getenv(test_var)
-        self.assertEqual(compat_getenv(test_var), test_str)
-
-    def test_compat_expanduser(self):
-        old_home = os.environ.get('HOME')
-        test_str = r'C:\Documents and Settings\тест\Application Data'
-        compat_setenv('HOME', test_str)
-        self.assertEqual(compat_expanduser('~'), test_str)
-        compat_setenv('HOME', old_home or '')
-
-    def test_all_present(self):
-        import youtube_dl.compat
-        all_names = youtube_dl.compat.__all__
-        present_names = set(filter(
-            lambda c: '_' in c and not c.startswith('_'),
-            dir(youtube_dl.compat))) - set(['unicode_literals'])
-        self.assertEqual(all_names, sorted(present_names))
-
-    def test_compat_urllib_parse_unquote(self):
-        self.assertEqual(compat_urllib_parse_unquote('abc%20def'), 'abc def')
-        self.assertEqual(compat_urllib_parse_unquote('%7e/abc+def'), '~/abc+def')
-        self.assertEqual(compat_urllib_parse_unquote(''), '')
-        self.assertEqual(compat_urllib_parse_unquote('%'), '%')
-        self.assertEqual(compat_urllib_parse_unquote('%%'), '%%')
-        self.assertEqual(compat_urllib_parse_unquote('%%%'), '%%%')
-        self.assertEqual(compat_urllib_parse_unquote('%2F'), '/')
-        self.assertEqual(compat_urllib_parse_unquote('%2f'), '/')
-        self.assertEqual(compat_urllib_parse_unquote('%E6%B4%A5%E6%B3%A2'), '津波')
-        self.assertEqual(
-            compat_urllib_parse_unquote('''<meta property="og:description" content="%E2%96%81%E2%96%82%E2%96%83%E2%96%84%25%E2%96%85%E2%96%86%E2%96%87%E2%96%88" />
-%<a href="https://ar.wikipedia.org/wiki/%D8%AA%D8%B3%D9%88%D9%86%D8%A7%D9%85%D9%8A">%a'''),
-            '''<meta property="og:description" content="▁▂▃▄%▅▆▇█" />
-%<a href="https://ar.wikipedia.org/wiki/تسونامي">%a''')
-        self.assertEqual(
-            compat_urllib_parse_unquote('''%28%5E%E2%97%A3_%E2%97%A2%5E%29%E3%81%A3%EF%B8%BB%E3%83%87%E2%95%90%E4%B8%80 %E2%87%80 %E2%87%80 %E2%87%80 %E2%87%80 %E2%87%80 %E2%86%B6%I%Break%25Things%'''),
-            '''(^◣_◢^)っ︻デ═一 ⇀ ⇀ ⇀ ⇀ ⇀ ↶%I%Break%Things%''')
-
-    def test_compat_urllib_parse_unquote_plus(self):
-        self.assertEqual(compat_urllib_parse_unquote_plus('abc%20def'), 'abc def')
-        self.assertEqual(compat_urllib_parse_unquote_plus('%7e/abc+def'), '~/abc def')
-
-    def test_compat_urllib_parse_urlencode(self):
-        self.assertEqual(compat_urllib_parse_urlencode({'abc': 'def'}), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode({'abc': b'def'}), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode({b'abc': 'def'}), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode({b'abc': b'def'}), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode([('abc', 'def')]), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode([('abc', b'def')]), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode([(b'abc', 'def')]), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode([(b'abc', b'def')]), 'abc=def')
-
-    def test_compat_shlex_split(self):
-        self.assertEqual(compat_shlex_split('-option "one two"'), ['-option', 'one two'])
-        self.assertEqual(compat_shlex_split('-option "one\ntwo" \n -flag'), ['-option', 'one\ntwo', '-flag'])
-        self.assertEqual(compat_shlex_split('-val 中文'), ['-val', '中文'])
-
-    def test_compat_etree_Element(self):
-        try:
-            compat_etree_Element.items
-        except AttributeError:
-            self.fail('compat_etree_Element is not a type')
-
-    def test_compat_etree_fromstring(self):
-        xml = '''
-            <root foo="bar" spam="中文">
-                <normal>foo</normal>
-                <chinese>中文</chinese>
-                <foo><bar>spam</bar></foo>
-            </root>
-        '''
-        doc = compat_etree_fromstring(xml.encode('utf-8'))
-        self.assertTrue(isinstance(doc.attrib['foo'], compat_str))
-        self.assertTrue(isinstance(doc.attrib['spam'], compat_str))
-        self.assertTrue(isinstance(doc.find('normal').text, compat_str))
-        self.assertTrue(isinstance(doc.find('chinese').text, compat_str))
-        self.assertTrue(isinstance(doc.find('foo/bar').text, compat_str))
-
-    def test_compat_etree_fromstring_doctype(self):
-        xml = '''<?xml version="1.0"?>
-<!DOCTYPE smil PUBLIC "-//W3C//DTD SMIL 2.0//EN" "http://www.w3.org/2001/SMIL20/SMIL20.dtd">
-<smil xmlns="http://www.w3.org/2001/SMIL20/Language"></smil>'''
-        compat_etree_fromstring(xml)
-
-    def test_struct_unpack(self):
-        self.assertEqual(compat_struct_unpack('!B', b'\x00'), (0,))
-
-
-if __name__ == '__main__':
-    unittest.main()
@@ -1,7 +1,5 @@
 #!/usr/bin/env python
 
-from __future__ import unicode_literals
-
 # Allow direct execution
 import os
 import sys
@@ -9,13 +7,11 @@ import unittest
 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 
 from test.helper import (
-    assertGreaterEqual,
-    expect_warnings,
     get_params,
-    gettestcases,
-    expect_info_dict,
+    get_testcases,
     try_rm,
-    report_warning,
+    md5,
+    report_warning
 )
 
 
@@ -25,123 +21,83 @@ import json
 import socket
 
 import youtube_dl.YoutubeDL
-from youtube_dl.compat import (
-    compat_http_client,
+from youtube_dl.utils import (
+    compat_str,
     compat_urllib_error,
     compat_HTTPError,
-)
-from youtube_dl.utils import (
     DownloadError,
     ExtractorError,
-    format_bytes,
     UnavailableVideoError,
 )
 from youtube_dl.extractor import get_info_extractor
 
 RETRIES = 3
 
 
 class YoutubeDL(youtube_dl.YoutubeDL):
     def __init__(self, *args, **kwargs):
         self.to_stderr = self.to_screen
         self.processed_info_dicts = []
         super(YoutubeDL, self).__init__(*args, **kwargs)
 
     def report_warning(self, message):
         # Don't accept warnings during tests
         raise ExtractorError(message)
 
     def process_info(self, info_dict):
         self.processed_info_dicts.append(info_dict)
         return super(YoutubeDL, self).process_info(info_dict)
 
 
 def _file_md5(fn):
     with open(fn, 'rb') as f:
         return hashlib.md5(f.read()).hexdigest()
 
-
-defs = gettestcases()
+defs = get_testcases()
 
 
 class TestDownload(unittest.TestCase):
-    # Parallel testing in nosetests. See
-    # http://nose.readthedocs.org/en/latest/doc_tests/test_multiprocess/multiprocess.html
-    _multiprocess_shared_ = True
-
     maxDiff = None
 
-    def __str__(self):
-        """Identify each test with the `add_ie` attribute, if available."""
-
-        def strclass(cls):
-            """From 2.7's unittest; 2.6 had _strclass so we can't import it."""
-            return '%s.%s' % (cls.__module__, cls.__name__)
-
-        add_ie = getattr(self, self._testMethodName).add_ie
-        return '%s (%s)%s:' % (self._testMethodName,
-                               strclass(self.__class__),
-                               ' [%s]' % add_ie if add_ie else '')
-
     def setUp(self):
         self.defs = defs
 
-# Dynamically generate tests
-
-
-def generator(test_case, tname):
-
+### Dynamically generate tests
+def generator(test_case):
     def test_template(self):
-        ie = youtube_dl.extractor.get_info_extractor(test_case['name'])()
-        other_ies = [get_info_extractor(ie_key)() for ie_key in test_case.get('add_ie', [])]
-        is_playlist = any(k.startswith('playlist') for k in test_case)
-        test_cases = test_case.get(
-            'playlist', [] if is_playlist else [test_case])
-
+        ie = youtube_dl.extractor.get_info_extractor(test_case['name'])
+        other_ies = [get_info_extractor(ie_key) for ie_key in test_case.get('add_ie', [])]
         def print_skipping(reason):
             print('Skipping %s: %s' % (test_case['name'], reason))
         if not ie.working():
             print_skipping('IE marked as not _WORKING')
             return
-        for tc in test_cases:
-            info_dict = tc.get('info_dict', {})
-            if not (info_dict.get('id') and info_dict.get('ext')):
-                raise Exception('Test definition incorrect. The output file cannot be known. Are both \'id\' and \'ext\' keys present?')
-
+        if 'playlist' not in test_case:
+            info_dict = test_case.get('info_dict', {})
+            if not test_case.get('file') and not (info_dict.get('id') and info_dict.get('ext')):
+                print_skipping('The output file cannot be know, the "file" '
+                    'key is missing or the info_dict is incomplete')
+                return
         if 'skip' in test_case:
             print_skipping(test_case['skip'])
             return
         for other_ie in other_ies:
             if not other_ie.working():
-                print_skipping('test depends on %sIE, marked as not WORKING' % other_ie.ie_key())
+                print_skipping(u'test depends on %sIE, marked as not WORKING' % other_ie.ie_key())
                 return
 
         params = get_params(test_case.get('params', {}))
-        params['outtmpl'] = tname + '_' + params['outtmpl']
-        if is_playlist and 'playlist' not in test_case:
-            params.setdefault('extract_flat', 'in_playlist')
-            params.setdefault('skip_download', True)
 
-        ydl = YoutubeDL(params, auto_init=False)
+        ydl = YoutubeDL(params)
         ydl.add_default_info_extractors()
         finished_hook_called = set()
 
         def _hook(status):
             if status['status'] == 'finished':
                 finished_hook_called.add(status['filename'])
-        ydl.add_progress_hook(_hook)
-        expect_warnings(ydl, test_case.get('expected_warnings', []))
+        ydl.fd.add_progress_hook(_hook)
 
         def get_tc_filename(tc):
-            return ydl.prepare_filename(tc.get('info_dict', {}))
+            return tc.get('file') or ydl.prepare_filename(tc.get('info_dict', {}))
 
-        res_dict = None
-
-        def try_rm_tcs_files(tcs=None):
-            if tcs is None:
-                tcs = test_cases
-            for tc in tcs:
+        test_cases = test_case.get('playlist', [test_case])
+        def try_rm_tcs_files():
+            for tc in test_cases:
                 tc_filename = get_tc_filename(tc)
                 try_rm(tc_filename)
                 try_rm(tc_filename + '.part')
@@ -151,19 +107,14 @@ def generator(test_case, tname):
             try_num = 1
             while True:
                 try:
-                    # We're not using .download here since that is just a shim
-                    # for outside error handling, and returns the exit code
-                    # instead of the result dict.
-                    res_dict = ydl.extract_info(
-                        test_case['url'],
-                        force_generic_extractor=params.get('force_generic_extractor', False))
+                    ydl.download([test_case['url']])
                 except (DownloadError, ExtractorError) as err:
                     # Check if the exception is not a network related one
-                    if not err.exc_info[0] in (compat_urllib_error.URLError, socket.timeout, UnavailableVideoError, compat_http_client.BadStatusLine) or (err.exc_info[0] == compat_HTTPError and err.exc_info[1].code == 503):
+                    if not err.exc_info[0] in (compat_urllib_error.URLError, socket.timeout, UnavailableVideoError) or (err.exc_info[0] == compat_HTTPError and err.exc_info[1].code == 503):
                         raise
 
                     if try_num == RETRIES:
-                        report_warning('%s failed due to network errors, skipping...' % tname)
+                        report_warning(u'Failed due to network errors, skipping...')
                         return
 
                     print('Retrying: {0} failed tries\n\n##########\n\n'.format(try_num))
@@ -172,91 +123,53 @@ def generator(test_case, tname):
                 else:
                     break
 
-            if is_playlist:
-                self.assertTrue(res_dict['_type'] in ['playlist', 'multi_video'])
-                self.assertTrue('entries' in res_dict)
-                expect_info_dict(self, res_dict, test_case.get('info_dict', {}))
-
-            if 'playlist_mincount' in test_case:
-                assertGreaterEqual(
-                    self,
-                    len(res_dict['entries']),
-                    test_case['playlist_mincount'],
-                    'Expected at least %d in playlist %s, but got only %d' % (
-                        test_case['playlist_mincount'], test_case['url'],
-                        len(res_dict['entries'])))
-            if 'playlist_count' in test_case:
-                self.assertEqual(
-                    len(res_dict['entries']),
-                    test_case['playlist_count'],
-                    'Expected %d entries in playlist %s, but got %d.' % (
-                        test_case['playlist_count'],
-                        test_case['url'],
-                        len(res_dict['entries']),
-                    ))
-            if 'playlist_duration_sum' in test_case:
-                got_duration = sum(e['duration'] for e in res_dict['entries'])
-                self.assertEqual(
-                    test_case['playlist_duration_sum'], got_duration)
-
-            # Generalize both playlists and single videos to unified format for
-            # simplicity
-            if 'entries' not in res_dict:
-                res_dict['entries'] = [res_dict]
-
-            for tc_num, tc in enumerate(test_cases):
-                tc_res_dict = res_dict['entries'][tc_num]
-                # First, check test cases' data against extracted data alone
-                expect_info_dict(self, tc_res_dict, tc.get('info_dict', {}))
-                # Now, check downloaded file consistency
+            for tc in test_cases:
                 tc_filename = get_tc_filename(tc)
                 if not test_case.get('params', {}).get('skip_download', False):
                     self.assertTrue(os.path.exists(tc_filename), msg='Missing file ' + tc_filename)
                     self.assertTrue(tc_filename in finished_hook_called)
-                    expected_minsize = tc.get('file_minsize', 10000)
-                    if expected_minsize is not None:
-                        if params.get('test'):
-                            expected_minsize = max(expected_minsize, 10000)
-                        got_fsize = os.path.getsize(tc_filename)
-                        assertGreaterEqual(
-                            self, got_fsize, expected_minsize,
-                            'Expected %s to be at least %s, but it\'s only %s ' %
-                            (tc_filename, format_bytes(expected_minsize),
-                             format_bytes(got_fsize)))
-                    if 'md5' in tc:
-                        md5_for_file = _file_md5(tc_filename)
-                        self.assertEqual(tc['md5'], md5_for_file)
-                # Finally, check test cases' data again but this time against
-                # extracted data from info JSON file written during processing
                 info_json_fn = os.path.splitext(tc_filename)[0] + '.info.json'
-                self.assertTrue(
-                    os.path.exists(info_json_fn),
-                    'Missing info file %s' % info_json_fn)
+                self.assertTrue(os.path.exists(info_json_fn))
+                if 'md5' in tc:
+                    md5_for_file = _file_md5(tc_filename)
+                    self.assertEqual(md5_for_file, tc['md5'])
                 with io.open(info_json_fn, encoding='utf-8') as infof:
                     info_dict = json.load(infof)
-                expect_info_dict(self, info_dict, tc.get('info_dict', {}))
+                for (info_field, expected) in tc.get('info_dict', {}).items():
+                    if isinstance(expected, compat_str) and expected.startswith('md5:'):
+                        got = 'md5:' + md5(info_dict.get(info_field))
+                    else:
+                        got = info_dict.get(info_field)
+                    self.assertEqual(expected, got,
+                        u'invalid value for field %s, expected %r, got %r' % (info_field, expected, got))
+
+                # If checkable fields are missing from the test case, print the info_dict
+                test_info_dict = dict((key, value if not isinstance(value, compat_str) or len(value) < 250 else 'md5:' + md5(value))
+                    for key, value in info_dict.items()
+                    if value and key in ('title', 'description', 'uploader', 'upload_date', 'uploader_id', 'location'))
+                if not all(key in tc.get('info_dict', {}).keys() for key in test_info_dict.keys()):
+                    sys.stderr.write(u'\n"info_dict": ' + json.dumps(test_info_dict, ensure_ascii=False, indent=2) + u'\n')
+
+                # Check for the presence of mandatory fields
+                for key in ('id', 'url', 'title', 'ext'):
+                    self.assertTrue(key in info_dict.keys() and info_dict[key])
+                # Check for mandatory fields that are automatically set by YoutubeDL
+                for key in ['webpage_url', 'extractor', 'extractor_key']:
+                    self.assertTrue(info_dict.get(key), u'Missing field: %s' % key)
         finally:
             try_rm_tcs_files()
-            if is_playlist and res_dict is not None and res_dict.get('entries'):
-                # Remove all other files that may have been extracted if the
-                # extractor returns full results even with extract_flat
-                res_tcs = [{'info_dict': e} for e in res_dict['entries']]
-                try_rm_tcs_files(res_tcs)
 
     return test_template
 
-# And add them to TestDownload
+### And add them to TestDownload
 for n, test_case in enumerate(defs):
+    test_method = generator(test_case)
     tname = 'test_' + str(test_case['name'])
     i = 1
     while hasattr(TestDownload, tname):
-        tname = 'test_%s_%d' % (test_case['name'], i)
+        tname = 'test_' + str(test_case['name']) + '_' + str(i)
         i += 1
-    test_method = generator(test_case, tname)
-    test_method.__name__ = str(tname)
-    ie_list = test_case.get('add_ie')
-    test_method.add_ie = ie_list and ','.join(ie_list)
+    test_method.__name__ = tname
     setattr(TestDownload, test_method.__name__, test_method)
     del test_method
@@ -1,115 +0,0 @@
-#!/usr/bin/env python
-# coding: utf-8
-from __future__ import unicode_literals
-
-# Allow direct execution
-import os
-import re
-import sys
-import unittest
-sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
-
-from test.helper import http_server_port, try_rm
-from youtube_dl import YoutubeDL
-from youtube_dl.compat import compat_http_server
-from youtube_dl.downloader.http import HttpFD
-from youtube_dl.utils import encodeFilename
-import threading
-
-TEST_DIR = os.path.dirname(os.path.abspath(__file__))
-
-
-TEST_SIZE = 10 * 1024
-
-
-class HTTPTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
-    def log_message(self, format, *args):
-        pass
-
-    def send_content_range(self, total=None):
-        range_header = self.headers.get('Range')
-        start = end = None
-        if range_header:
-            mobj = re.search(r'^bytes=(\d+)-(\d+)', range_header)
-            if mobj:
-                start = int(mobj.group(1))
-                end = int(mobj.group(2))
-        valid_range = start is not None and end is not None
-        if valid_range:
-            content_range = 'bytes %d-%d' % (start, end)
-            if total:
-                content_range += '/%d' % total
-            self.send_header('Content-Range', content_range)
-        return (end - start + 1) if valid_range else total
-
-    def serve(self, range=True, content_length=True):
-        self.send_response(200)
-        self.send_header('Content-Type', 'video/mp4')
-        size = TEST_SIZE
-        if range:
-            size = self.send_content_range(TEST_SIZE)
-        if content_length:
-            self.send_header('Content-Length', size)
-        self.end_headers()
-        self.wfile.write(b'#' * size)
-
-    def do_GET(self):
-        if self.path == '/regular':
-            self.serve()
-        elif self.path == '/no-content-length':
-            self.serve(content_length=False)
-        elif self.path == '/no-range':
-            self.serve(range=False)
-        elif self.path == '/no-range-no-content-length':
-            self.serve(range=False, content_length=False)
-        else:
-            assert False
-
-
-class FakeLogger(object):
-    def debug(self, msg):
-        pass
-
-    def warning(self, msg):
-        pass
-
-    def error(self, msg):
-        pass
-
-
-class TestHttpFD(unittest.TestCase):
-    def setUp(self):
-        self.httpd = compat_http_server.HTTPServer(
-            ('127.0.0.1', 0), HTTPTestRequestHandler)
-        self.port = http_server_port(self.httpd)
-        self.server_thread = threading.Thread(target=self.httpd.serve_forever)
-        self.server_thread.daemon = True
-        self.server_thread.start()
-
-    def download(self, params, ep):
-        params['logger'] = FakeLogger()
-        ydl = YoutubeDL(params)
-        downloader = HttpFD(ydl, params)
-        filename = 'testfile.mp4'
-        try_rm(encodeFilename(filename))
-        self.assertTrue(downloader.real_download(filename, {
-            'url': 'http://127.0.0.1:%d/%s' % (self.port, ep),
-        }))
-        self.assertEqual(os.path.getsize(encodeFilename(filename)), TEST_SIZE)
-        try_rm(encodeFilename(filename))
-
-    def download_all(self, params):
-        for ep in ('regular', 'no-content-length', 'no-range', 'no-range-no-content-length'):
-            self.download(params, ep)
-
-    def test_regular(self):
-        self.download_all({})
-
-    def test_chunked(self):
-        self.download_all({
-            'http_chunk_size': 1000,
-        })
-
-
-if __name__ == '__main__':
-    unittest.main()
@@ -1,44 +1,26 @@
-#!/usr/bin/env python
-# coding: utf-8
-
-from __future__ import unicode_literals
-
 import unittest
 
 import sys
 import os
 import subprocess
-sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
-
-from youtube_dl.utils import encodeArgument
-
 rootDir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
 
 
 try:
     _DEV_NULL = subprocess.DEVNULL
 except AttributeError:
     _DEV_NULL = open(os.devnull, 'wb')
 
 
 class TestExecution(unittest.TestCase):
     def test_import(self):
         subprocess.check_call([sys.executable, '-c', 'import youtube_dl'], cwd=rootDir)
 
     def test_module_exec(self):
-        if sys.version_info >= (2, 7):  # Python 2.6 doesn't support package execution
+        if sys.version_info >= (2,7):  # Python 2.6 doesn't support package execution
             subprocess.check_call([sys.executable, '-m', 'youtube_dl', '--version'], cwd=rootDir, stdout=_DEV_NULL)
 
     def test_main_exec(self):
         subprocess.check_call([sys.executable, 'youtube_dl/__main__.py', '--version'], cwd=rootDir, stdout=_DEV_NULL)
 
-    def test_cmdline_umlauts(self):
-        p = subprocess.Popen(
-            [sys.executable, 'youtube_dl/__main__.py', encodeArgument('ä'), '--version'],
-            cwd=rootDir, stdout=_DEV_NULL, stderr=subprocess.PIPE)
-        _, stderr = p.communicate()
-        self.assertFalse(stderr)
-
 
 if __name__ == '__main__':
     unittest.main()
@@ -1,166 +0,0 @@
-#!/usr/bin/env python
-# coding: utf-8
-from __future__ import unicode_literals
-
-# Allow direct execution
-import os
-import sys
-import unittest
-sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
-
-from test.helper import http_server_port
-from youtube_dl import YoutubeDL
-from youtube_dl.compat import compat_http_server, compat_urllib_request
-import ssl
-import threading
-
-TEST_DIR = os.path.dirname(os.path.abspath(__file__))
-
-
-class HTTPTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
-    def log_message(self, format, *args):
-        pass
-
-    def do_GET(self):
-        if self.path == '/video.html':
-            self.send_response(200)
-            self.send_header('Content-Type', 'text/html; charset=utf-8')
-            self.end_headers()
-            self.wfile.write(b'<html><video src="/vid.mp4" /></html>')
-        elif self.path == '/vid.mp4':
-            self.send_response(200)
-            self.send_header('Content-Type', 'video/mp4')
-            self.end_headers()
-            self.wfile.write(b'\x00\x00\x00\x00\x20\x66\x74[video]')
-        elif self.path == '/302':
-            if sys.version_info[0] == 3:
-                # XXX: Python 3 http server does not allow non-ASCII header values
-                self.send_response(404)
-                self.end_headers()
-                return
-
-            new_url = 'http://127.0.0.1:%d/中文.html' % http_server_port(self.server)
-            self.send_response(302)
-            self.send_header(b'Location', new_url.encode('utf-8'))
-            self.end_headers()
-        elif self.path == '/%E4%B8%AD%E6%96%87.html':
-            self.send_response(200)
-            self.send_header('Content-Type', 'text/html; charset=utf-8')
-            self.end_headers()
-            self.wfile.write(b'<html><video src="/vid.mp4" /></html>')
-        else:
-            assert False
-
-
-class FakeLogger(object):
-    def debug(self, msg):
-        pass
-
-    def warning(self, msg):
-        pass
-
-    def error(self, msg):
-        pass
-
-
-class TestHTTP(unittest.TestCase):
-    def setUp(self):
-        self.httpd = compat_http_server.HTTPServer(
-            ('127.0.0.1', 0), HTTPTestRequestHandler)
-        self.port = http_server_port(self.httpd)
-        self.server_thread = threading.Thread(target=self.httpd.serve_forever)
-        self.server_thread.daemon = True
-        self.server_thread.start()
-
-    def test_unicode_path_redirection(self):
-        # XXX: Python 3 http server does not allow non-ASCII header values
-        if sys.version_info[0] == 3:
-            return
-
-        ydl = YoutubeDL({'logger': FakeLogger()})
-        r = ydl.extract_info('http://127.0.0.1:%d/302' % self.port)
-        self.assertEqual(r['entries'][0]['url'], 'http://127.0.0.1:%d/vid.mp4' % self.port)
-
-
-class TestHTTPS(unittest.TestCase):
-    def setUp(self):
-        certfn = os.path.join(TEST_DIR, 'testcert.pem')
-        self.httpd = compat_http_server.HTTPServer(
-            ('127.0.0.1', 0), HTTPTestRequestHandler)
-        self.httpd.socket = ssl.wrap_socket(
-            self.httpd.socket, certfile=certfn, server_side=True)
-        self.port = http_server_port(self.httpd)
-        self.server_thread = threading.Thread(target=self.httpd.serve_forever)
-        self.server_thread.daemon = True
-        self.server_thread.start()
-
-    def test_nocheckcertificate(self):
-        if sys.version_info >= (2, 7, 9):  # No certificate checking anyways
-            ydl = YoutubeDL({'logger': FakeLogger()})
-            self.assertRaises(
-                Exception,
-                ydl.extract_info, 'https://127.0.0.1:%d/video.html' % self.port)
-
-        ydl = YoutubeDL({'logger': FakeLogger(), 'nocheckcertificate': True})
-        r = ydl.extract_info('https://127.0.0.1:%d/video.html' % self.port)
-        self.assertEqual(r['entries'][0]['url'], 'https://127.0.0.1:%d/vid.mp4' % self.port)
-
-
-def _build_proxy_handler(name):
-    class HTTPTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
-        proxy_name = name
-
-        def log_message(self, format, *args):
-            pass
-
-        def do_GET(self):
-            self.send_response(200)
-            self.send_header('Content-Type', 'text/plain; charset=utf-8')
-            self.end_headers()
-            self.wfile.write('{self.proxy_name}: {self.path}'.format(self=self).encode('utf-8'))
-    return HTTPTestRequestHandler
-
-
-class TestProxy(unittest.TestCase):
-    def setUp(self):
-        self.proxy = compat_http_server.HTTPServer(
-            ('127.0.0.1', 0), _build_proxy_handler('normal'))
-        self.port = http_server_port(self.proxy)
|
|
||||||
self.proxy_thread = threading.Thread(target=self.proxy.serve_forever)
|
|
||||||
self.proxy_thread.daemon = True
|
|
||||||
self.proxy_thread.start()
|
|
||||||
|
|
||||||
self.geo_proxy = compat_http_server.HTTPServer(
|
|
||||||
('127.0.0.1', 0), _build_proxy_handler('geo'))
|
|
||||||
self.geo_port = http_server_port(self.geo_proxy)
|
|
||||||
self.geo_proxy_thread = threading.Thread(target=self.geo_proxy.serve_forever)
|
|
||||||
self.geo_proxy_thread.daemon = True
|
|
||||||
self.geo_proxy_thread.start()
|
|
||||||
|
|
||||||
def test_proxy(self):
|
|
||||||
geo_proxy = '127.0.0.1:{0}'.format(self.geo_port)
|
|
||||||
ydl = YoutubeDL({
|
|
||||||
'proxy': '127.0.0.1:{0}'.format(self.port),
|
|
||||||
'geo_verification_proxy': geo_proxy,
|
|
||||||
})
|
|
||||||
url = 'http://foo.com/bar'
|
|
||||||
response = ydl.urlopen(url).read().decode('utf-8')
|
|
||||||
self.assertEqual(response, 'normal: {0}'.format(url))
|
|
||||||
|
|
||||||
req = compat_urllib_request.Request(url)
|
|
||||||
req.add_header('Ytdl-request-proxy', geo_proxy)
|
|
||||||
response = ydl.urlopen(req).read().decode('utf-8')
|
|
||||||
self.assertEqual(response, 'geo: {0}'.format(url))
|
|
||||||
|
|
||||||
def test_proxy_with_idn(self):
|
|
||||||
ydl = YoutubeDL({
|
|
||||||
'proxy': '127.0.0.1:{0}'.format(self.port),
|
|
||||||
})
|
|
||||||
url = 'http://中文.tw/'
|
|
||||||
response = ydl.urlopen(url).read().decode('utf-8')
|
|
||||||
# b'xn--fiq228c' is '中文'.encode('idna')
|
|
||||||
self.assertEqual(response, 'normal: http://xn--fiq228c.tw/')
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
|
||||||
unittest.main()
|
|
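The fixture pattern used throughout this file (bind to port 0 so the OS picks a free port, serve on a daemon thread, read the port back from the socket) can be sketched standalone. This is a minimal illustration, not the project's code: it uses Python 3's plain `http.server` in place of the `compat_http_server` shim, and `EchoHandler` is a made-up handler.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Echo the request path back as the response body.
        body = self.path.encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain; charset=utf-8')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging, as the handlers above do


# Bind to port 0: the OS assigns a free port, which http_server_port()
# in the suite reads back the same way, via socket.getsockname().
httpd = HTTPServer(('127.0.0.1', 0), EchoHandler)
port = httpd.socket.getsockname()[1]
thread = threading.Thread(target=httpd.serve_forever)
thread.daemon = True  # don't keep the process alive after the test
thread.start()

body = urlopen('http://127.0.0.1:%d/hello' % port).read().decode('utf-8')
httpd.shutdown()
print(body)  # the echoed request path
```

The daemon flag matters: without it, a test failure would leave the server thread blocking interpreter exit.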
@ -1,48 +0,0 @@
#!/usr/bin/env python

from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from test.helper import FakeYDL
from youtube_dl.extractor import IqiyiIE


class IqiyiIEWithCredentials(IqiyiIE):
    def _get_login_info(self):
        return 'foo', 'bar'


class WarningLogger(object):
    def __init__(self):
        self.messages = []

    def warning(self, msg):
        self.messages.append(msg)

    def debug(self, msg):
        pass

    def error(self, msg):
        pass


class TestIqiyiSDKInterpreter(unittest.TestCase):
    def test_iqiyi_sdk_interpreter(self):
        '''
        Test the functionality of IqiyiSDKInterpreter by trying to log in

        If `sign` is incorrect, /validate call throws an HTTP 556 error
        '''
        logger = WarningLogger()
        ie = IqiyiIEWithCredentials(FakeYDL({'logger': logger}))
        ie._login()
        self.assertTrue('unable to log in:' in logger.messages[0])


if __name__ == '__main__':
    unittest.main()
@ -1,117 +0,0 @@
#!/usr/bin/env python

from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.jsinterp import JSInterpreter


class TestJSInterpreter(unittest.TestCase):
    def test_basic(self):
        jsi = JSInterpreter('function x(){;}')
        self.assertEqual(jsi.call_function('x'), None)

        jsi = JSInterpreter('function x3(){return 42;}')
        self.assertEqual(jsi.call_function('x3'), 42)

        jsi = JSInterpreter('var x5 = function(){return 42;}')
        self.assertEqual(jsi.call_function('x5'), 42)

    def test_calc(self):
        jsi = JSInterpreter('function x4(a){return 2*a+1;}')
        self.assertEqual(jsi.call_function('x4', 3), 7)

    def test_empty_return(self):
        jsi = JSInterpreter('function f(){return; y()}')
        self.assertEqual(jsi.call_function('f'), None)

    def test_morespace(self):
        jsi = JSInterpreter('function x (a) { return 2 * a + 1 ; }')
        self.assertEqual(jsi.call_function('x', 3), 7)

        jsi = JSInterpreter('function f () { x = 2 ; return x; }')
        self.assertEqual(jsi.call_function('f'), 2)

    def test_strange_chars(self):
        jsi = JSInterpreter('function $_xY1 ($_axY1) { var $_axY2 = $_axY1 + 1; return $_axY2; }')
        self.assertEqual(jsi.call_function('$_xY1', 20), 21)

    def test_operators(self):
        jsi = JSInterpreter('function f(){return 1 << 5;}')
        self.assertEqual(jsi.call_function('f'), 32)

        jsi = JSInterpreter('function f(){return 19 & 21;}')
        self.assertEqual(jsi.call_function('f'), 17)

        jsi = JSInterpreter('function f(){return 11 >> 2;}')
        self.assertEqual(jsi.call_function('f'), 2)

    def test_array_access(self):
        jsi = JSInterpreter('function f(){var x = [1,2,3]; x[0] = 4; x[0] = 5; x[2] = 7; return x;}')
        self.assertEqual(jsi.call_function('f'), [5, 2, 7])

    def test_parens(self):
        jsi = JSInterpreter('function f(){return (1) + (2) * ((( (( (((((3)))))) )) ));}')
        self.assertEqual(jsi.call_function('f'), 7)

        jsi = JSInterpreter('function f(){return (1 + 2) * 3;}')
        self.assertEqual(jsi.call_function('f'), 9)

    def test_assignments(self):
        jsi = JSInterpreter('function f(){var x = 20; x = 30 + 1; return x;}')
        self.assertEqual(jsi.call_function('f'), 31)

        jsi = JSInterpreter('function f(){var x = 20; x += 30 + 1; return x;}')
        self.assertEqual(jsi.call_function('f'), 51)

        jsi = JSInterpreter('function f(){var x = 20; x -= 30 + 1; return x;}')
        self.assertEqual(jsi.call_function('f'), -11)

    def test_comments(self):
        'Skipping: Not yet fully implemented'
        return
        jsi = JSInterpreter('''
        function x() {
            var x = /* 1 + */ 2;
            var y = /* 30
            * 40 */ 50;
            return x + y;
        }
        ''')
        self.assertEqual(jsi.call_function('x'), 52)

        jsi = JSInterpreter('''
        function f() {
            var x = "/*";
            var y = 1 /* comment */ + 2;
            return y;
        }
        ''')
        self.assertEqual(jsi.call_function('f'), 3)

    def test_precedence(self):
        jsi = JSInterpreter('''
        function x() {
            var a = [10, 20, 30, 40, 50];
            var b = 6;
            a[0]=a[b%a.length];
            return a;
        }''')
        self.assertEqual(jsi.call_function('x'), [20, 20, 30, 40, 50])

    def test_call(self):
        jsi = JSInterpreter('''
        function x() { return 2; }
        function y(a) { return x() + a; }
        function z() { return y(3); }
        ''')
        self.assertEqual(jsi.call_function('z'), 5)


if __name__ == '__main__':
    unittest.main()
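The expected values in these tests can be double-checked by hand: for integer operands, JavaScript's shift, bitwise, and modulo operators agree with Python's, so a quick Python sketch reproduces the fixtures' expected results.

```python
# Operator fixtures from test_operators, with integer operands where
# Python and JavaScript semantics coincide:
assert 1 << 5 == 32
assert 19 & 21 == 17   # 0b10011 & 0b10101 == 0b10001
assert 11 >> 2 == 2    # floor(11 / 4)

# test_precedence: a[0] = a[b % a.length] with a = [10..50] and b = 6.
a = [10, 20, 30, 40, 50]
b = 6
a[0] = a[b % len(a)]   # 6 % 5 == 1, so a[0] becomes a[1] == 20
assert a == [20, 20, 30, 40, 50]
```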
@ -1,26 +0,0 @@
# coding: utf-8
from __future__ import unicode_literals

import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))


from youtube_dl.extractor import (
    gen_extractors,
)


class TestNetRc(unittest.TestCase):
    def test_netrc_present(self):
        for ie in gen_extractors():
            if not hasattr(ie, '_login'):
                continue
            self.assertTrue(
                hasattr(ie, '_NETRC_MACHINE'),
                'Extractor %s supports login, but is missing a _NETRC_MACHINE property' % ie.IE_NAME)


if __name__ == '__main__':
    unittest.main()
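The check above is a general "attribute contract" pattern: any class implementing X must also declare Y. A self-contained sketch of the same idea, with made-up stand-in classes (none of these names exist in the project):

```python
# Hypothetical stand-ins for extractors; only the attribute contract matters.
class WithLogin(object):
    IE_NAME = 'withlogin'
    _NETRC_MACHINE = 'example'

    def _login(self):
        pass


class WithoutLogin(object):
    IE_NAME = 'withoutlogin'


class ForgotMachine(object):
    IE_NAME = 'forgot'

    def _login(self):
        pass


def check_netrc_contract(extractors):
    """Return the names of extractors that define _login but no _NETRC_MACHINE."""
    missing = []
    for ie in extractors:
        if hasattr(ie, '_login') and not hasattr(ie, '_NETRC_MACHINE'):
            missing.append(ie.IE_NAME)
    return missing


print(check_netrc_contract([WithLogin(), WithoutLogin()]))   # []
print(check_netrc_contract([WithLogin(), ForgotMachine()]))  # ['forgot']
```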
@ -1,26 +0,0 @@
# coding: utf-8

from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.options import _hide_login_info


class TestOptions(unittest.TestCase):
    def test_hide_login_info(self):
        self.assertEqual(_hide_login_info(['-u', 'foo', '-p', 'bar']),
                         ['-u', 'PRIVATE', '-p', 'PRIVATE'])
        self.assertEqual(_hide_login_info(['-u']), ['-u'])
        self.assertEqual(_hide_login_info(['-u', 'foo', '-u', 'bar']),
                         ['-u', 'PRIVATE', '-u', 'PRIVATE'])
        self.assertEqual(_hide_login_info(['--username=foo']),
                         ['--username=PRIVATE'])


if __name__ == '__main__':
    unittest.main()
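The behaviour being tested (mask the value following a credential flag, and the value embedded in `--opt=value` form, while leaving a dangling flag untouched) can be sketched as a simplified re-implementation. This is not the project's actual `_hide_login_info`; it is a minimal version that satisfies the same assertions.

```python
def hide_login_info(opts):
    """Replace credential values in a CLI argument list with 'PRIVATE'."""
    PRIVATE_OPTS = ('-u', '-p', '--username', '--password')
    result = []
    hide_next = False
    for opt in opts:
        if hide_next:
            # Previous option was a credential flag: mask its value.
            result.append('PRIVATE')
            hide_next = False
        elif opt in PRIVATE_OPTS:
            # Flag and value are separate tokens; mask the next token.
            result.append(opt)
            hide_next = True
        elif '=' in opt and opt.split('=', 1)[0] in PRIVATE_OPTS:
            # Flag and value in one token, e.g. --username=foo.
            result.append(opt.split('=', 1)[0] + '=PRIVATE')
        else:
            result.append(opt)
    return result


print(hide_login_info(['-u', 'foo', '-p', 'bar']))  # ['-u', 'PRIVATE', '-p', 'PRIVATE']
```

Note the `['-u']` case: the flag is last, `hide_next` is never consumed, and the list comes back unchanged, matching the test above.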
115	test/test_playlists.py	Normal file
@ -0,0 +1,115 @@
#!/usr/bin/env python
# encoding: utf-8

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from test.helper import FakeYDL


from youtube_dl.extractor import (
    DailymotionPlaylistIE,
    DailymotionUserIE,
    VimeoChannelIE,
    UstreamChannelIE,
    SoundcloudSetIE,
    SoundcloudUserIE,
    LivestreamIE,
    NHLVideocenterIE,
    BambuserChannelIE,
    BandcampAlbumIE
)


class TestPlaylists(unittest.TestCase):
    def assertIsPlaylist(self, info):
        """Make sure the info has '_type' set to 'playlist'"""
        self.assertEqual(info['_type'], 'playlist')

    def test_dailymotion_playlist(self):
        dl = FakeYDL()
        ie = DailymotionPlaylistIE(dl)
        result = ie.extract('http://www.dailymotion.com/playlist/xv4bw_nqtv_sport/1#video=xl8v3q')
        self.assertIsPlaylist(result)
        self.assertEqual(result['title'], u'SPORT')
        self.assertTrue(len(result['entries']) > 20)

    def test_dailymotion_user(self):
        dl = FakeYDL()
        ie = DailymotionUserIE(dl)
        result = ie.extract('http://www.dailymotion.com/user/generation-quoi/')
        self.assertIsPlaylist(result)
        self.assertEqual(result['title'], u'Génération Quoi')
        self.assertTrue(len(result['entries']) >= 26)

    def test_vimeo_channel(self):
        dl = FakeYDL()
        ie = VimeoChannelIE(dl)
        result = ie.extract('http://vimeo.com/channels/tributes')
        self.assertIsPlaylist(result)
        self.assertEqual(result['title'], u'Vimeo Tributes')
        self.assertTrue(len(result['entries']) > 24)

    def test_ustream_channel(self):
        dl = FakeYDL()
        ie = UstreamChannelIE(dl)
        result = ie.extract('http://www.ustream.tv/channel/young-americans-for-liberty')
        self.assertIsPlaylist(result)
        self.assertEqual(result['id'], u'5124905')
        self.assertTrue(len(result['entries']) >= 11)

    def test_soundcloud_set(self):
        dl = FakeYDL()
        ie = SoundcloudSetIE(dl)
        result = ie.extract('https://soundcloud.com/the-concept-band/sets/the-royal-concept-ep')
        self.assertIsPlaylist(result)
        self.assertEqual(result['title'], u'The Royal Concept EP')
        self.assertTrue(len(result['entries']) >= 6)

    def test_soundcloud_user(self):
        dl = FakeYDL()
        ie = SoundcloudUserIE(dl)
        result = ie.extract('https://soundcloud.com/the-concept-band')
        self.assertIsPlaylist(result)
        self.assertEqual(result['id'], u'9615865')
        self.assertTrue(len(result['entries']) >= 12)

    def test_livestream_event(self):
        dl = FakeYDL()
        ie = LivestreamIE(dl)
        result = ie.extract('http://new.livestream.com/tedx/cityenglish')
        self.assertIsPlaylist(result)
        self.assertEqual(result['title'], u'TEDCity2.0 (English)')
        self.assertTrue(len(result['entries']) >= 4)

    def test_nhl_videocenter(self):
        dl = FakeYDL()
        ie = NHLVideocenterIE(dl)
        result = ie.extract('http://video.canucks.nhl.com/videocenter/console?catid=999')
        self.assertIsPlaylist(result)
        self.assertEqual(result['id'], u'999')
        self.assertEqual(result['title'], u'Highlights')
        self.assertEqual(len(result['entries']), 12)

    def test_bambuser_channel(self):
        dl = FakeYDL()
        ie = BambuserChannelIE(dl)
        result = ie.extract('http://bambuser.com/channel/pixelversity')
        self.assertIsPlaylist(result)
        self.assertEqual(result['title'], u'pixelversity')
        self.assertTrue(len(result['entries']) >= 60)

    def test_bandcamp_album(self):
        dl = FakeYDL()
        ie = BandcampAlbumIE(dl)
        result = ie.extract('http://mpallante.bandcamp.com/album/nightmare-night-ep')
        self.assertIsPlaylist(result)
        self.assertEqual(result['title'], u'Nightmare Night EP')
        self.assertTrue(len(result['entries']) >= 4)

if __name__ == '__main__':
    unittest.main()
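Every test in this file leans on `assertIsPlaylist`, which only inspects the `_type` key of the result dict. A minimal sketch of that shape, with a made-up result dict for illustration:

```python
def is_playlist(info):
    """Mirror of the suite's assertIsPlaylist: a playlist result is a plain
    dict whose '_type' is 'playlist' and whose 'entries' list the videos."""
    return isinstance(info, dict) and info.get('_type') == 'playlist'


# A fabricated result in the shape the extractors above return:
result = {
    '_type': 'playlist',
    'id': '999',
    'title': 'Highlights',
    'entries': [{'id': str(i)} for i in range(12)],
}

print(is_playlist(result))            # True
print(is_playlist({'_type': 'video'}))  # False
```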
@ -1,17 +0,0 @@
#!/usr/bin/env python

from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.postprocessor import MetadataFromTitlePP


class TestMetadataFromTitle(unittest.TestCase):
    def test_format_to_regex(self):
        pp = MetadataFromTitlePP(None, '%(title)s - %(artist)s')
        self.assertEqual(pp._titleregex, r'(?P<title>.+)\ \-\ (?P<artist>.+)')
@ -1,118 +0,0 @@
#!/usr/bin/env python
# coding: utf-8
from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import random
import subprocess

from test.helper import (
    FakeYDL,
    get_params,
)
from youtube_dl.compat import (
    compat_str,
    compat_urllib_request,
)


class TestMultipleSocks(unittest.TestCase):
    @staticmethod
    def _check_params(attrs):
        params = get_params()
        for attr in attrs:
            if attr not in params:
                print('Missing %s. Skipping.' % attr)
                return
        return params

    def test_proxy_http(self):
        params = self._check_params(['primary_proxy', 'primary_server_ip'])
        if params is None:
            return
        ydl = FakeYDL({
            'proxy': params['primary_proxy']
        })
        self.assertEqual(
            ydl.urlopen('http://yt-dl.org/ip').read().decode('utf-8'),
            params['primary_server_ip'])

    def test_proxy_https(self):
        params = self._check_params(['primary_proxy', 'primary_server_ip'])
        if params is None:
            return
        ydl = FakeYDL({
            'proxy': params['primary_proxy']
        })
        self.assertEqual(
            ydl.urlopen('https://yt-dl.org/ip').read().decode('utf-8'),
            params['primary_server_ip'])

    def test_secondary_proxy_http(self):
        params = self._check_params(['secondary_proxy', 'secondary_server_ip'])
        if params is None:
            return
        ydl = FakeYDL()
        req = compat_urllib_request.Request('http://yt-dl.org/ip')
        req.add_header('Ytdl-request-proxy', params['secondary_proxy'])
        self.assertEqual(
            ydl.urlopen(req).read().decode('utf-8'),
            params['secondary_server_ip'])

    def test_secondary_proxy_https(self):
        params = self._check_params(['secondary_proxy', 'secondary_server_ip'])
        if params is None:
            return
        ydl = FakeYDL()
        req = compat_urllib_request.Request('https://yt-dl.org/ip')
        req.add_header('Ytdl-request-proxy', params['secondary_proxy'])
        self.assertEqual(
            ydl.urlopen(req).read().decode('utf-8'),
            params['secondary_server_ip'])


class TestSocks(unittest.TestCase):
    _SKIP_SOCKS_TEST = True

    def setUp(self):
        if self._SKIP_SOCKS_TEST:
            return

        self.port = random.randint(20000, 30000)
        self.server_process = subprocess.Popen([
            'srelay', '-f', '-i', '127.0.0.1:%d' % self.port],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    def tearDown(self):
        if self._SKIP_SOCKS_TEST:
            return

        self.server_process.terminate()
        self.server_process.communicate()

    def _get_ip(self, protocol):
        if self._SKIP_SOCKS_TEST:
            return '127.0.0.1'

        ydl = FakeYDL({
            'proxy': '%s://127.0.0.1:%d' % (protocol, self.port),
        })
        return ydl.urlopen('http://yt-dl.org/ip').read().decode('utf-8')

    def test_socks4(self):
        self.assertTrue(isinstance(self._get_ip('socks4'), compat_str))

    def test_socks4a(self):
        self.assertTrue(isinstance(self._get_ip('socks4a'), compat_str))

    def test_socks5(self):
        self.assertTrue(isinstance(self._get_ip('socks5'), compat_str))


if __name__ == '__main__':
    unittest.main()
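The `_check_params` helper implements a soft-skip: a test that depends on external configuration returns early (rather than failing) when a required key is absent. A standalone sketch of the same pattern, with fabricated parameter names:

```python
def check_params(params, attrs):
    """Return params only if every required key is present; otherwise None,
    which callers treat as 'skip this test'."""
    for attr in attrs:
        if attr not in params:
            print('Missing %s. Skipping.' % attr)
            return None
    return params


complete = {'primary_proxy': 'socks5://127.0.0.1:1080',
            'primary_server_ip': '203.0.113.7'}
print(check_params(complete, ['primary_proxy', 'primary_server_ip']) is not None)  # True
print(check_params({}, ['primary_proxy']))  # None, after a "Missing ..." notice
```

In modern test suites the same intent is usually expressed with `unittest.skipUnless` or `self.skipTest()`, which report the test as skipped instead of silently passing.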
@ -1,5 +1,4 @@
|
|||||||
#!/usr/bin/env python
|
#!/usr/bin/env python
|
||||||
from __future__ import unicode_literals
|
|
||||||
|
|
||||||
# Allow direct execution
|
# Allow direct execution
|
||||||
import os
|
import os
|
||||||
@ -14,72 +13,72 @@ from youtube_dl.extractor import (
|
|||||||
YoutubeIE,
|
YoutubeIE,
|
||||||
DailymotionIE,
|
DailymotionIE,
|
||||||
TEDIE,
|
TEDIE,
|
||||||
VimeoIE,
|
|
||||||
WallaIE,
|
|
||||||
CeskaTelevizeIE,
|
|
||||||
LyndaIE,
|
|
||||||
NPOIE,
|
|
||||||
ComedyCentralIE,
|
|
||||||
NRKTVIE,
|
|
||||||
RaiPlayIE,
|
|
||||||
VikiIE,
|
|
||||||
ThePlatformIE,
|
|
||||||
ThePlatformFeedIE,
|
|
||||||
RTVEALaCartaIE,
|
|
||||||
DemocracynowIE,
|
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
class BaseTestSubtitles(unittest.TestCase):
|
class BaseTestSubtitles(unittest.TestCase):
|
||||||
url = None
|
url = None
|
||||||
IE = None
|
IE = None
|
||||||
|
|
||||||
def setUp(self):
|
def setUp(self):
|
||||||
self.DL = FakeYDL()
|
self.DL = FakeYDL()
|
||||||
self.ie = self.IE()
|
self.ie = self.IE(self.DL)
|
||||||
self.DL.add_info_extractor(self.ie)
|
|
||||||
|
|
||||||
def getInfoDict(self):
|
def getInfoDict(self):
|
||||||
info_dict = self.DL.extract_info(self.url, download=False)
|
info_dict = self.ie.extract(self.url)
|
||||||
return info_dict
|
return info_dict
|
||||||
|
|
||||||
def getSubtitles(self):
|
def getSubtitles(self):
|
||||||
info_dict = self.getInfoDict()
|
info_dict = self.getInfoDict()
|
||||||
subtitles = info_dict['requested_subtitles']
|
return info_dict['subtitles']
|
||||||
if not subtitles:
|
|
||||||
return subtitles
|
|
||||||
for sub_info in subtitles.values():
|
|
||||||
if sub_info.get('data') is None:
|
|
||||||
uf = self.DL.urlopen(sub_info['url'])
|
|
||||||
sub_info['data'] = uf.read().decode('utf-8')
|
|
||||||
return dict((l, sub_info['data']) for l, sub_info in subtitles.items())
|
|
||||||
|
|
||||||
|
|
||||||
class TestYoutubeSubtitles(BaseTestSubtitles):
|
class TestYoutubeSubtitles(BaseTestSubtitles):
|
||||||
url = 'QRS8MkLhQmM'
|
url = 'QRS8MkLhQmM'
|
||||||
IE = YoutubeIE
|
IE = YoutubeIE
|
||||||
|
|
||||||
|
def getSubtitles(self):
|
||||||
|
info_dict = self.getInfoDict()
|
||||||
|
return info_dict[0]['subtitles']
|
||||||
|
|
||||||
|
def test_youtube_no_writesubtitles(self):
|
||||||
|
self.DL.params['writesubtitles'] = False
|
||||||
|
subtitles = self.getSubtitles()
|
||||||
|
self.assertEqual(subtitles, None)
|
||||||
|
|
||||||
|
def test_youtube_subtitles(self):
|
||||||
|
self.DL.params['writesubtitles'] = True
|
||||||
|
subtitles = self.getSubtitles()
|
||||||
|
self.assertEqual(md5(subtitles['en']), '4cd9278a35ba2305f47354ee13472260')
|
||||||
|
|
||||||
|
def test_youtube_subtitles_lang(self):
|
||||||
|
self.DL.params['writesubtitles'] = True
|
||||||
|
self.DL.params['subtitleslangs'] = ['it']
|
||||||
|
subtitles = self.getSubtitles()
|
||||||
|
self.assertEqual(md5(subtitles['it']), '164a51f16f260476a05b50fe4c2f161d')
|
||||||
|
|
||||||
def test_youtube_allsubtitles(self):
|
def test_youtube_allsubtitles(self):
|
||||||
self.DL.params['writesubtitles'] = True
|
self.DL.params['writesubtitles'] = True
|
||||||
self.DL.params['allsubtitles'] = True
|
self.DL.params['allsubtitles'] = True
|
||||||
subtitles = self.getSubtitles()
|
subtitles = self.getSubtitles()
|
||||||
self.assertEqual(len(subtitles.keys()), 13)
|
self.assertEqual(len(subtitles.keys()), 13)
|
||||||
self.assertEqual(md5(subtitles['en']), '3cb210999d3e021bd6c7f0ea751eab06')
|
|
||||||
self.assertEqual(md5(subtitles['it']), '6d752b98c31f1cf8d597050c7a2cb4b5')
|
|
||||||
for lang in ['fr', 'de']:
|
|
||||||
self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)
|
|
||||||
|
|
||||||
def test_youtube_subtitles_ttml_format(self):
|
def test_youtube_subtitles_sbv_format(self):
|
||||||
self.DL.params['writesubtitles'] = True
|
self.DL.params['writesubtitles'] = True
|
||||||
self.DL.params['subtitlesformat'] = 'ttml'
|
self.DL.params['subtitlesformat'] = 'sbv'
|
||||||
subtitles = self.getSubtitles()
|
subtitles = self.getSubtitles()
|
||||||
self.assertEqual(md5(subtitles['en']), 'e306f8c42842f723447d9f63ad65df54')
|
self.assertEqual(md5(subtitles['en']), '13aeaa0c245a8bed9a451cb643e3ad8b')
|
||||||
|
|
||||||
def test_youtube_subtitles_vtt_format(self):
|
def test_youtube_subtitles_vtt_format(self):
|
||||||
self.DL.params['writesubtitles'] = True
|
self.DL.params['writesubtitles'] = True
|
||||||
self.DL.params['subtitlesformat'] = 'vtt'
|
self.DL.params['subtitlesformat'] = 'vtt'
|
||||||
subtitles = self.getSubtitles()
|
subtitles = self.getSubtitles()
|
||||||
self.assertEqual(md5(subtitles['en']), '3cb210999d3e021bd6c7f0ea751eab06')
|
self.assertEqual(md5(subtitles['en']), '356cdc577fde0c6783b9b822e7206ff7')
|
||||||
|
|
||||||
|
def test_youtube_list_subtitles(self):
|
||||||
|
self.DL.expect_warning(u'Video doesn\'t have automatic captions')
|
||||||
|
self.DL.params['listsubtitles'] = True
|
||||||
|
info_dict = self.getInfoDict()
|
||||||
|
self.assertEqual(info_dict, None)
|
||||||
|
|
||||||
def test_youtube_automatic_captions(self):
|
def test_youtube_automatic_captions(self):
|
||||||
self.url = '8YoUxe5ncPo'
|
self.url = '8YoUxe5ncPo'
|
||||||
@ -88,258 +87,124 @@ class TestYoutubeSubtitles(BaseTestSubtitles):
|
|||||||
subtitles = self.getSubtitles()
|
subtitles = self.getSubtitles()
|
||||||
self.assertTrue(subtitles['it'] is not None)
|
self.assertTrue(subtitles['it'] is not None)
|
||||||
|
|
||||||
def test_youtube_translated_subtitles(self):
|
|
||||||
# This video has a subtitles track, which can be translated
|
|
||||||
self.url = 'Ky9eprVWzlI'
|
|
||||||
self.DL.params['writeautomaticsub'] = True
|
|
||||||
self.DL.params['subtitleslangs'] = ['it']
|
|
||||||
subtitles = self.getSubtitles()
|
|
||||||
self.assertTrue(subtitles['it'] is not None)
|
|
||||||
|
|
||||||
def test_youtube_nosubtitles(self):
|
def test_youtube_nosubtitles(self):
|
||||||
self.DL.expect_warning('video doesn\'t have subtitles')
|
self.DL.expect_warning(u'video doesn\'t have subtitles')
|
||||||
self.url = 'n5BB19UTcdA'
|
self.url = 'sAjKT8FhjI8'
|
||||||
self.DL.params['writesubtitles'] = True
|
self.DL.params['writesubtitles'] = True
|
||||||
self.DL.params['allsubtitles'] = True
|
self.DL.params['allsubtitles'] = True
|
||||||
subtitles = self.getSubtitles()
|
subtitles = self.getSubtitles()
|
||||||
self.assertFalse(subtitles)
|
self.assertEqual(len(subtitles), 0)
|
||||||
|
|
||||||
|
def test_youtube_multiple_langs(self):
|
||||||
|
self.url = 'QRS8MkLhQmM'
|
||||||
|
self.DL.params['writesubtitles'] = True
|
||||||
|
langs = ['it', 'fr', 'de']
|
||||||
|
self.DL.params['subtitleslangs'] = langs
|
||||||
|
subtitles = self.getSubtitles()
|
||||||
|
for lang in langs:
|
||||||
|
self.assertTrue(subtitles.get(lang) is not None, u'Subtitles for \'%s\' not extracted' % lang)
|
||||||
|
|
||||||
|
|
||||||
class TestDailymotionSubtitles(BaseTestSubtitles):
|
class TestDailymotionSubtitles(BaseTestSubtitles):
|
||||||
url = 'http://www.dailymotion.com/video/xczg00'
|
url = 'http://www.dailymotion.com/video/xczg00'
|
||||||
IE = DailymotionIE
|
IE = DailymotionIE
|
||||||
|
|
||||||
|
def test_no_writesubtitles(self):
|
||||||
|
subtitles = self.getSubtitles()
|
||||||
|
self.assertEqual(subtitles, None)
|
||||||
|
|
||||||
|
def test_subtitles(self):
|
||||||
|
self.DL.params['writesubtitles'] = True
|
||||||
|
subtitles = self.getSubtitles()
|
||||||
|
self.assertEqual(md5(subtitles['en']), '976553874490cba125086bbfea3ff76f')
|
||||||
|
|
||||||
|
def test_subtitles_lang(self):
|
||||||
|
self.DL.params['writesubtitles'] = True
|
||||||
|
self.DL.params['subtitleslangs'] = ['fr']
|
||||||
|
subtitles = self.getSubtitles()
|
||||||
|
self.assertEqual(md5(subtitles['fr']), '594564ec7d588942e384e920e5341792')
|
||||||
|
|
||||||
def test_allsubtitles(self):
|
def test_allsubtitles(self):
|
||||||
self.DL.params['writesubtitles'] = True
|
self.DL.params['writesubtitles'] = True
|
||||||
self.DL.params['allsubtitles'] = True
|
self.DL.params['allsubtitles'] = True
|
||||||
subtitles = self.getSubtitles()
|
subtitles = self.getSubtitles()
|
||||||
self.assertTrue(len(subtitles.keys()) >= 6)
|
self.assertEqual(len(subtitles.keys()), 5)
|
||||||
self.assertEqual(md5(subtitles['en']), '976553874490cba125086bbfea3ff76f')
|
|
||||||
self.assertEqual(md5(subtitles['fr']), '594564ec7d588942e384e920e5341792')
|
def test_list_subtitles(self):
|
||||||
for lang in ['es', 'fr', 'de']:
|
self.DL.expect_warning(u'Automatic Captions not supported by this server')
|
||||||
self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)
|
self.DL.params['listsubtitles'] = True
|
||||||
|
info_dict = self.getInfoDict()
|
||||||
|
self.assertEqual(info_dict, None)
|
||||||
|
|
||||||
|
def test_automatic_captions(self):
|
||||||
|
self.DL.expect_warning(u'Automatic Captions not supported by this server')
|
||||||
|
self.DL.params['writeautomaticsub'] = True
|
||||||
|
self.DL.params['subtitleslang'] = ['en']
|
||||||
|
subtitles = self.getSubtitles()
|
||||||
|
self.assertTrue(len(subtitles.keys()) == 0)
|
||||||
|
|
||||||
def test_nosubtitles(self):
|
def test_nosubtitles(self):
|
||||||
self.DL.expect_warning('video doesn\'t have subtitles')
|
self.DL.expect_warning(u'video doesn\'t have subtitles')
|
||||||
self.url = 'http://www.dailymotion.com/video/x12u166_le-zapping-tele-star-du-08-aout-2013_tv'
|
self.url = 'http://www.dailymotion.com/video/x12u166_le-zapping-tele-star-du-08-aout-2013_tv'
|
||||||
self.DL.params['writesubtitles'] = True
|
self.DL.params['writesubtitles'] = True
|
||||||
self.DL.params['allsubtitles'] = True
|
self.DL.params['allsubtitles'] = True
|
||||||
subtitles = self.getSubtitles()
|
subtitles = self.getSubtitles()
|
||||||
self.assertFalse(subtitles)
|
self.assertEqual(len(subtitles), 0)
|
||||||
|
|
||||||
|
def test_multiple_langs(self):
|
||||||
|
self.DL.params['writesubtitles'] = True
|
||||||
|
langs = ['es', 'fr', 'de']
|
||||||
|
self.DL.params['subtitleslangs'] = langs
|
||||||
|
subtitles = self.getSubtitles()
|
||||||
|
for lang in langs:
|
||||||
|
self.assertTrue(subtitles.get(lang) is not None, u'Subtitles for \'%s\' not extracted' % lang)
|
||||||
|
|
||||||
|
|
||||||
 class TestTedSubtitles(BaseTestSubtitles):
     url = 'http://www.ted.com/talks/dan_dennett_on_our_consciousness.html'
     IE = TEDIE
 
-    def test_allsubtitles(self):
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
+    def test_no_writesubtitles(self):
         subtitles = self.getSubtitles()
-        self.assertTrue(len(subtitles.keys()) >= 28)
-        self.assertEqual(md5(subtitles['en']), '4262c1665ff928a2dada178f62cb8d14')
-        self.assertEqual(md5(subtitles['fr']), '66a63f7f42c97a50f8c0e90bc7797bb5')
-        for lang in ['es', 'fr', 'de']:
-            self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)
+        self.assertEqual(subtitles, None)
 
+    def test_subtitles(self):
+        self.DL.params['writesubtitles'] = True
+        subtitles = self.getSubtitles()
+        self.assertEqual(md5(subtitles['en']), '2154f31ff9b9f89a0aa671537559c21d')
 
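The checksum assertions above call an `md5()` helper that comes from the test suite's helper module, not from `hashlib` directly. A minimal sketch consistent with how it is used here (a name and signature assumed from usage, hashing the downloaded subtitle text into a hex digest):

```python
import hashlib

def md5(s):
    # Hex digest of the UTF-8 encoded subtitle data, suitable for comparing
    # against the literal 32-character checksums in the assertions above.
    return hashlib.md5(s.encode('utf-8')).hexdigest()
```

Comparing a short digest instead of the full subtitle body keeps the expected values in the tests compact while still catching any change in the extracted data.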
-class TestVimeoSubtitles(BaseTestSubtitles):
-    url = 'http://vimeo.com/76979871'
-    IE = VimeoIE
+    def test_subtitles_lang(self):
+        self.DL.params['writesubtitles'] = True
+        self.DL.params['subtitleslangs'] = ['fr']
+        subtitles = self.getSubtitles()
+        self.assertEqual(md5(subtitles['fr']), '7616cbc6df20ec2c1204083c83871cf6')
 
     def test_allsubtitles(self):
         self.DL.params['writesubtitles'] = True
         self.DL.params['allsubtitles'] = True
         subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['de', 'en', 'es', 'fr']))
-        self.assertEqual(md5(subtitles['en']), '8062383cf4dec168fc40a088aa6d5888')
-        self.assertEqual(md5(subtitles['fr']), 'b6191146a6c5d3a452244d853fde6dc8')
+        self.assertEqual(len(subtitles.keys()), 28)
 
-    def test_nosubtitles(self):
-        self.DL.expect_warning('video doesn\'t have subtitles')
-        self.url = 'http://vimeo.com/56015672'
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
+    def test_list_subtitles(self):
+        self.DL.expect_warning(u'Automatic Captions not supported by this server')
+        self.DL.params['listsubtitles'] = True
+        info_dict = self.getInfoDict()
+        self.assertEqual(info_dict, None)
 
+    def test_automatic_captions(self):
+        self.DL.expect_warning(u'Automatic Captions not supported by this server')
+        self.DL.params['writeautomaticsub'] = True
+        self.DL.params['subtitleslang'] = ['en']
         subtitles = self.getSubtitles()
-        self.assertFalse(subtitles)
+        self.assertTrue(len(subtitles.keys()) == 0)
 
+    def test_multiple_langs(self):
-class TestWallaSubtitles(BaseTestSubtitles):
-    url = 'http://vod.walla.co.il/movie/2705958/the-yes-men'
-    IE = WallaIE
-
-    def test_allsubtitles(self):
-        self.DL.expect_warning('Automatic Captions not supported by this server')
         self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
+        langs = ['es', 'fr', 'de']
+        self.DL.params['subtitleslangs'] = langs
         subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['heb']))
-        self.assertEqual(md5(subtitles['heb']), 'e758c5d7cb982f6bef14f377ec7a3920')
+        for lang in langs:
+            self.assertTrue(subtitles.get(lang) is not None, u'Subtitles for \'%s\' not extracted' % lang)
 
-    def test_nosubtitles(self):
-        self.DL.expect_warning('video doesn\'t have subtitles')
-        self.url = 'http://vod.walla.co.il/movie/2642630/one-direction-all-for-one'
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertFalse(subtitles)
 
 
-class TestCeskaTelevizeSubtitles(BaseTestSubtitles):
-    url = 'http://www.ceskatelevize.cz/ivysilani/10600540290-u6-uzasny-svet-techniky'
-    IE = CeskaTelevizeIE
-
-    def test_allsubtitles(self):
-        self.DL.expect_warning('Automatic Captions not supported by this server')
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['cs']))
-        self.assertTrue(len(subtitles['cs']) > 20000)
-
-    def test_nosubtitles(self):
-        self.DL.expect_warning('video doesn\'t have subtitles')
-        self.url = 'http://www.ceskatelevize.cz/ivysilani/ivysilani/10441294653-hyde-park-civilizace/214411058091220'
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertFalse(subtitles)
-
-
-class TestLyndaSubtitles(BaseTestSubtitles):
-    url = 'http://www.lynda.com/Bootstrap-tutorials/Using-exercise-files/110885/114408-4.html'
-    IE = LyndaIE
-
-    def test_allsubtitles(self):
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['en']))
-        self.assertEqual(md5(subtitles['en']), '09bbe67222259bed60deaa26997d73a7')
-
-
-class TestNPOSubtitles(BaseTestSubtitles):
-    url = 'http://www.npo.nl/nos-journaal/28-08-2014/POW_00722860'
-    IE = NPOIE
-
-    def test_allsubtitles(self):
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['nl']))
-        self.assertEqual(md5(subtitles['nl']), 'fc6435027572b63fb4ab143abd5ad3f4')
-
-
-class TestMTVSubtitles(BaseTestSubtitles):
-    url = 'http://www.cc.com/video-clips/p63lk0/adam-devine-s-house-party-chasing-white-swans'
-    IE = ComedyCentralIE
-
-    def getInfoDict(self):
-        return super(TestMTVSubtitles, self).getInfoDict()['entries'][0]
-
-    def test_allsubtitles(self):
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['en']))
-        self.assertEqual(md5(subtitles['en']), '78206b8d8a0cfa9da64dc026eea48961')
-
-
-class TestNRKSubtitles(BaseTestSubtitles):
-    url = 'http://tv.nrk.no/serie/ikke-gjoer-dette-hjemme/DMPV73000411/sesong-2/episode-1'
-    IE = NRKTVIE
-
-    def test_allsubtitles(self):
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['no']))
-        self.assertEqual(md5(subtitles['no']), '544fa917d3197fcbee64634559221cc2')
-
-
-class TestRaiPlaySubtitles(BaseTestSubtitles):
-    url = 'http://www.raiplay.it/video/2014/04/Report-del-07042014-cb27157f-9dd0-4aee-b788-b1f67643a391.html'
-    IE = RaiPlayIE
-
-    def test_allsubtitles(self):
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['it']))
-        self.assertEqual(md5(subtitles['it']), 'b1d90a98755126b61e667567a1f6680a')
-
-
-class TestVikiSubtitles(BaseTestSubtitles):
-    url = 'http://www.viki.com/videos/1060846v-punch-episode-18'
-    IE = VikiIE
-
-    def test_allsubtitles(self):
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['en']))
-        self.assertEqual(md5(subtitles['en']), '53cb083a5914b2d84ef1ab67b880d18a')
-
-
-class TestThePlatformSubtitles(BaseTestSubtitles):
-    # from http://www.3playmedia.com/services-features/tools/integrations/theplatform/
-    # (see http://theplatform.com/about/partners/type/subtitles-closed-captioning/)
-    url = 'theplatform:JFUjUE1_ehvq'
-    IE = ThePlatformIE
-
-    def test_allsubtitles(self):
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['en']))
-        self.assertEqual(md5(subtitles['en']), '97e7670cbae3c4d26ae8bcc7fdd78d4b')
-
-
-class TestThePlatformFeedSubtitles(BaseTestSubtitles):
-    url = 'http://feed.theplatform.com/f/7wvmTC/msnbc_video-p-test?form=json&pretty=true&range=-40&byGuid=n_hardball_5biden_140207'
-    IE = ThePlatformFeedIE
-
-    def test_allsubtitles(self):
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['en']))
-        self.assertEqual(md5(subtitles['en']), '48649a22e82b2da21c9a67a395eedade')
-
-
-class TestRtveSubtitles(BaseTestSubtitles):
-    url = 'http://www.rtve.es/alacarta/videos/los-misterios-de-laura/misterios-laura-capitulo-32-misterio-del-numero-17-2-parte/2428621/'
-    IE = RTVEALaCartaIE
-
-    def test_allsubtitles(self):
-        print('Skipping, only available from Spain')
-        return
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['es']))
-        self.assertEqual(md5(subtitles['es']), '69e70cae2d40574fb7316f31d6eb7fca')
-
-
-class TestDemocracynowSubtitles(BaseTestSubtitles):
-    url = 'http://www.democracynow.org/shows/2015/7/3'
-    IE = DemocracynowIE
-
-    def test_allsubtitles(self):
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['en']))
-        self.assertEqual(md5(subtitles['en']), 'acaca989e24a9e45a6719c9b3d60815c')
-
-    def test_subtitles_in_page(self):
-        self.url = 'http://www.democracynow.org/2015/7/3/this_flag_comes_down_today_bree'
-        self.DL.params['writesubtitles'] = True
-        self.DL.params['allsubtitles'] = True
-        subtitles = self.getSubtitles()
-        self.assertEqual(set(subtitles.keys()), set(['en']))
-        self.assertEqual(md5(subtitles['en']), 'acaca989e24a9e45a6719c9b3d60815c')
-
-
 if __name__ == '__main__':
     unittest.main()
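The shared pattern in the tests above (set downloader params, call `getSubtitles()`, assert on the returned dict) can be reproduced offline with a stubbed `getSubtitles()`. This is a hypothetical stand-in, since the real `BaseTestSubtitles` drives a `FakeYDL` through a live extractor:

```python
import unittest

class StubSubtitlesTest(unittest.TestCase):
    # Hypothetical stand-in for BaseTestSubtitles: instead of extracting from
    # a URL, getSubtitles() returns canned data so the assertion style can be
    # demonstrated without network access.
    def getSubtitles(self):
        return {}

    def test_nosubtitles(self):
        subtitles = self.getSubtitles()
        # assertFalse accepts both an empty dict and None, which is why it
        # can replace the stricter assertEqual(len(subtitles), 0) form.
        self.assertFalse(subtitles)

# Run the stub test case programmatically rather than via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(StubSubtitlesTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same runner approach works for any single class from the suite, e.g. when only one extractor's subtitle tests need to be exercised.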