
[facebook] add support #5626

Merged
merged 41 commits into from
Nov 26, 2024

Conversation

zWolfrost
Contributor

@zWolfrost zWolfrost commented May 22, 2024

Fixes #470 and #2612 (probably a duplicate).
For now it supports Account Photos & Photo Albums.
The only way it can work is by making one request per post, so unfortunately it's not really optimized.

@zWolfrost
Contributor Author

zWolfrost commented May 23, 2024

It looks like Facebook blocks your account for about one hour when the extractor requests too many images. It happened to me after running the extractor and downloading 1800 of them.
Also, it appears to be only an account-level block (logging out removes it), and it only prevents you from viewing images by opening them from a link (opening them through the React UI still works).
It's probably best if the extractor actively avoids using the imported cookies unless requested otherwise (with proper warnings).
Please let me know your thoughts.

@zWolfrost zWolfrost marked this pull request as ready for review May 31, 2024 14:20
@MrJmpl3

MrJmpl3 commented Jun 13, 2024

It's probably best if the extractor actively avoids using the imported cookies, unless requested otherwise (with proper warnings).

I think photos and videos have a signature in the URL; Facebook may be able to track you and ban you using this info.

@zWolfrost
Contributor Author

I think photos and videos have a signature in the url, Facebook maybe can track you and ban using this info.

I don't think I understand how you would be able to ban someone using a signature in the photo URL. I think the most reasonable option would just be to use the request cookies (which include your account IDs and such) to account-ban you.

As I mentioned, it's not really a complete ban; it's only limited to some parts of the UI, and logging out (thus not sending the account request cookies) does remove the block, with the tradeoff that you can't view private or R-18 images.

I still don't know if being logged out in advance prevents the ban altogether. If that's the case, then I think I will add a warning about that.

also added author followups for singular images
@zWolfrost
Contributor Author

After doing some more testing, I can tell that not using cookies still gets you blocked from viewing images, in the sense that you are forced to log in, and it happens much faster than when using them.
I think it's best not to use cookies unless Facebook forces the user to log in, and to print a small warning whenever the extractor uses them. Doing this, I could extract about 2400 images before getting temporarily blocked, which I think is pretty good.

Also, I'd love to know if the extractor works for anyone else other than me, so please feel free to let me know.
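The fallback strategy described above could be sketched roughly like this. This is a hypothetical illustration, not the PR's actual code; the `fetch_photo_page` name, the `session` object, and the login-redirect detection are all assumptions:

```python
import logging

log = logging.getLogger("facebook")

def fetch_photo_page(session, url, cookies=None):
    """Hypothetical sketch: try without account cookies first, and only
    retry with them (plus a warning) when Facebook forces a login."""
    response = session.get(url)  # first attempt sends no account cookies
    if "login" in response.url and cookies:
        log.warning("Login required; retrying with account cookies")
        response = session.get(url, cookies=cookies)
    return response
```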

@AdBlocker69

Hi, I've tested your version and it seems to be working fine for pictures. I'm planning to save quite a few images from a public Facebook page and was wondering if using one of the --sleep options could prevent you from being blocked (or if Facebook just reacts to an arbitrary number of requests, no matter the frequency). And overall: does that mean I'm generally unable to connect to Facebook services (like using gallery-dl with it), or does it just prevent browser/account interaction?
Let me not forget: thanks for your work. I hope this gets implemented in the official project soon. Facebook is (still) such a big platform, so having a tool like gallery-dl supporting it is pretty important (imo)...

Facebook video support would be nice too. Luckily, in my case there weren't that many, so I was able to download them one by one with yt-dlp... but yt-dlp also doesn't support album/account video downloading (yet) like it does with YouTube, for example.

@zWolfrost
Contributor Author

Hi, thank you for your feedback.

I'm not sure if waiting to continue extracting would work, and even if it did, I have no idea for how long or after how many images the wait should start. That would require a lot of testing, and unfortunately every time I get blocked I have to wait about 6 hours to try again.

To be more specific, the "block" I'm talking about only prevents you from accessing images by their URL (the way the extractor does it); you can still access them through the React user interface.
That means accessing them by clicking them, scrolling with the arrows, etc., but if, for example, you reload the page while viewing one, an error pops up about you "using this feature too fast".
When not using an account, instead, you don't get the error but get redirected to the login page (aside from that, the behavior is the same).

As far as I can tell, this block is limited to just that, and you can do anything else on Facebook.

About video support, I will keep that in mind. I'm not sure how yt-dlp downloads videos; I will check that out when I have time.

@AdBlocker69

Thanks for the info :)
Btw, do you know what to do when, let's say, you have downloaded 2400 pictures, get blocked afterwards, and want to continue downloading from the same profile? Can you just continue after the 6 hours? I guess that when checking for duplicates, gallery-dl still makes requests for the 2400 images (as it goes chronologically from newest to oldest post), or does that work differently?

@zWolfrost
Contributor Author

No, I'm sorry, once you get blocked the photo page doesn't load at all (assuming you're loading it by its URL), so there is no way to get the metadata and such. This is the reason why I just added a way to continue the extraction from an image in the set/album instead of having to start from the beginning. Just take the photo URL and add "&setextract" to it to download the whole set from there instead of the photo alone. The user will be prompted with this URL if they get blocked while extracting.
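For illustration, a sketch of passing such a URL on the command line; the fbid/set values here are made-up placeholders, and the quotes matter because the shell would otherwise treat "&" as a command separator:

```shell
# Hypothetical photo URL; the fbid/set values are placeholders, not real IDs.
url='https://www.facebook.com/photo/?fbid=10100&set=a.20200&setextract'

# Quoting keeps the whole URL as one argument; unquoted, the shell would
# split at each '&' and try to run the fragments as separate commands:
#   gallery-dl "$url"
printf '%s\n' "$url"
```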

@AdBlocker69

Good idea for implementing that 👍🏻 Does it only work with the prompted URL? I just tried it up front by taking an image link and adding "&setextract" to it, but it gave me an 'unknown command' error after downloading just the single image.

Also, it seems like your video extraction only pulls the version without audio (the best one in the list of formats in yt-dlp, but there it gets merged with an audio-only version by default)... So it would be best to either add the ffmpeg merge by default or have it select the "hd" format by default, which has video+audio.

@zWolfrost
Contributor Author

zWolfrost commented Jun 18, 2024

The "&setextract" feature didn't work for you because you probably passed it to gallery-dl without using double quotes ("), and the command prompt recognized the "&" as the split character between two commands (you can use the ampersand to execute two commands in one line). That would also explain why it downloaded the image and then gave you an "unknown command" message, as you probably don't have a command assigned to the "setextract" keyword.

By the way, after further inspection, I don't think there's a way to make an "all profile videos" extractor, as the videos don't share a set ID I can use to navigate through them all.

Good catch on the audio thing though, I wasn't wearing headphones :)
I have fixed it just now; the audio gets downloaded as well. By default the two will be separate; to merge them you'll have to let youtube-dl/yt-dlp handle the download by adding "videos": "ytdl" to the facebook extractor configuration.
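For reference, a minimal config fragment applying that setting; the surrounding "extractor" nesting follows gallery-dl's standard configuration layout:

```json
{
    "extractor": {
        "facebook": {
            "videos": "ytdl"
        }
    }
}
```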

@AdBlocker69

AdBlocker69 commented Jun 18, 2024

Okay, several things I just realized by doing some trial and error 😅:
First of all, I need to use the full link for gallery-dl to detect which "set" I even want to continue downloading from; I had just used the short version (as marked blue) and wondered why it didn't do anything anymore after downloading the image the link points to.

[screenshot]

Secondly, I then need to put the full link in quotation marks, since otherwise, as you said, the shell treats the text after the ampersand as a second command (as marked red), giving me a 'syntax error' and not processing anything further after downloading the image the link points to.

[screenshot]

So this is how it has to be written to avoid any errors caused by the link format, command logic, etc.:

[screenshot]

Now I got it to work successfully 👌🏻

Alternatively, you can also just take the set ID given to you by the previous download (it's in the name of the folder where your set images were saved, e.g. the 'Timeline photos' set (all account images)) and append it manually to the 'short' image link, prefixed with "&set=":

[screenshot]

@zWolfrost
Contributor Author

I'm sorry if things got confusing 😥 at least you managed to make it work now.
Of course, someone whose download had been blocked would have been prompted with the full URL already, so there is no chance of this happening in a real situation (at least I hope so).
I will see if there is a way to get the set ID by inspecting the photo page by itself (if I remember correctly, there should be a default one).

@AdBlocker69

AdBlocker69 commented Jun 18, 2024

No problem, that's just what happens when the starting situation is slightly different :)
I just like to use short links because sometimes, when taking them directly while browsing the web, they have certain parameters in them (like a smaller-than-source image size, etc.) which are undesirable. So it's more or less best practice for me to take the link as 'raw' as possible to avoid any of that.
In this case, though, the extra information in the link was vital...

@zWolfrost
Contributor Author

zWolfrost commented Jun 18, 2024

There, I just had to change the matching URL pattern a little. Now it works even without including the set ID. Hopefully it's the same for you. I recommend avoiding this anyway, as Facebook acts a little weird when you navigate images without their set ID. Sometimes their sequence gets changed or some images get skipped altogether. Or maybe it just works fine and I unintentionally bugfixed it a while ago, I don't know.

@AdBlocker69

AdBlocker69 commented Jun 20, 2024

Works 🤙🏻
Thanks; I guess it's generally helpful to have it work like that too, for when you only have the image link from a third-party source or whatever and want to download from that point back; you then don't have to go back to the profile page itself to find the set ID.
So quality-of-life-wise it's good, no question.

the extractor should be ready :)
@fireattack
Contributor

fireattack commented Oct 13, 2024

Sorry if this has been mentioned, but this seems to only be able to process a user, not a single post (https://www.facebook.com/{username}/posts/{posthash}).

@zWolfrost
Contributor Author

@fireattack I just fixed it. Let me know if it works for you now.

@fireattack
Contributor

I now get error:

>python -m gallery_dl "https://www.facebook.com/joho.press.jp/posts/pfbid02mfFRpVkErLQxQ8cpD2f1hwXEVsFzK8kfNBKdK2Jndnx6AkmMQZuXhovwDgwvoDNil" 
[facebook][error] HttpError: '404 Not Found' for 'https://www.facebook.com/media/set/?set='

@zWolfrost
Contributor Author

@fireattack The problem is that the extractor can't really get all the images in such posts. You would have a better chance copying the set ID of one of those images, which is in their URLs (in your case it's pcb.1160563418981189), and giving the set page to gallery-dl (https://www.facebook.com/media/set/?set=pcb.1160563418981189). I'll see if I can do something about it now (as well as fixing the error).

@zWolfrost
Contributor Author

Right now the extractor should get the first set ID it can find in the post and try to extract that set. Sometimes the set contains way more images than the post, but you could still quit when it's done, I guess. I could technically make it quit by itself once it has extracted the same number of images as in the post (36+4), but I'm not sure if I want to.

@mikf
Owner

mikf commented Nov 25, 2024

I'd like to finally review and merge this PR to include it in the v1.28.0 release. Sorry for taking forever to get to this.

One of the general criticisms I have is your frequent use of .* and .*? in URL patterns. They should all be replaced with something more restrictive, at least [^/?#]+ to not go over "URL boundaries". Are the URLs in test/result/facebook.py all that should be matched? If not, could you post a complete list of possible URLs and/or add them as only-matching entries to the results list?
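To illustrate the concern with a toy example (these are made-up patterns for demonstration, not the extractor's real ones): `.*?` happily matches across path, query, and fragment boundaries, while `[^/?#]+` stays inside a single path segment:

```python
import re

# Toy patterns for illustration only, not the PR's actual regexes.
broad = re.compile(r"facebook\.com/.*?/photos")         # crosses URL boundaries
strict = re.compile(r"facebook\.com/([^/?#]+)/photos")  # single path segment

url = "https://www.facebook.com/someuser/photos"
odd = "https://www.facebook.com/a/b?x=1/photos"

assert strict.search(url).group(1) == "someuser"
assert broad.search(odd) is not None   # '.*?' matched across '/' and '?'
assert strict.search(odd) is None      # the restrictive pattern rejects it
```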

@zWolfrost
Contributor Author

The URLs in the test script should be the complete list of all of them. The only exception is facebook.FacebookProfileExtractor, which is kind of a "fallback" for the others and would work for all instances of a Facebook link that starts with "https://www.facebook.com/[username]/..." (and also "profile.php?id=" links, which I added to the tests just now). I made it like this because there are many account URLs that one would expect to work and extract the whole account when passed to gallery-dl. Of course I can change this if you want, along with the other things.
Now that we're at it, do you happen to know why the archive IDs don't seem to include the "extension" part of the format in the name (thus failing one of the tests)?

@mikf
Owner

mikf commented Nov 25, 2024

Now that we're at it, do you happen to know why the archive ids doesn't seem to have the "extension" part of the format in the name? (thus failing one of the tests?)

The formatter used for archive IDs during tests ignores extension values for ... reasons (probably to deal with undefined extension values, I don't remember), resulting in 644342003942740. as archive ID for both video and audio, but just during tests.

b1985d6#diff-e74101faac0b9c340fefef8dd3418e77d07cf8ce4df6a1c5110c91853804d876R250

Just add "#archive": False to the test case for the time being.
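As a sketch, the affected test entry would then look something like this; the URL shape and everything besides the "#archive" key are assumptions about the results-file format, built around the video ID mentioned above:

```python
# Hypothetical results entry; "#archive": False skips archive-ID checking.
entry = {
    "#url": "https://www.facebook.com/watch/?v=644342003942740",
    "#archive": False,
}
```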

@zWolfrost
Contributor Author

zWolfrost commented Nov 25, 2024

I have written "most of the requested changes" in the commit message because I can't really replace the .* and .*? URL patterns with [^/?#]+ most of the time, as they are often intended to skip over URL paths. I have changed the other stuff.

zWolfrost and others added 3 commits November 26, 2024 14:30
- more 'Sec-Fetch-…' headers
- simplify 'text.nameext_from_url()' calls
- replace 'sorted(…)[-1]' with 'max(…)'
- fix '_interval_429' usage
- use replacement fields in logging messages
get rid of '.*' and '.*?'
@mikf
Owner

mikf commented Nov 26, 2024

I can't really replace the .* and .*? URL patterns with [^/?#]+ most of the times

Seems to work just fine: cb286cb
All tests still pass, but let me know if these changes caused anything not included in the tests to fail.

@zWolfrost
Contributor Author

zWolfrost commented Nov 26, 2024

I forgot to tell you about the "&setextract" feature: if a photo URL ends with "&setextract", the extractor will instead go through the whole set starting from that photo. This is to avoid having to start from the beginning after being temporarily banned. I have added the remaining tests and adjusted the patterns concerned. Everything should work now.
(Note: this only works with a specific photo URL pattern, which is given directly by the extractor when appropriate)

@mikf mikf merged commit e9370b7 into mikf:master Nov 26, 2024
10 checks passed
@MikeRich88

MikeRich88 commented Nov 30, 2024

I'd like to test this, but the nightly builds are no good for Intel macs :(

Is there some reason it's not being built as universal? You can build one arm64 and one x86_64 and then lipo them together.

QuinnHex:Desktop mike$ chmod +x /Users/mike/Downloads/gallery-dl_macos
QuinnHex:Desktop mike$ /Users/mike/Downloads/gallery-dl_macos
-bash: /Users/mike/Downloads/gallery-dl_macos: Bad CPU type in executable
QuinnHex:Desktop mike$ file /Users/mike/Downloads/gallery-dl_macos
/Users/mike/Downloads/gallery-dl_macos: Mach-O 64-bit arm64 executable, flags:<NOUNDEFS|DYLDLINK|TWOLEVEL|PIE>

Edit: So you can get the nightlies through pip, but, for some reason, if you try to do anything like python3 -m pip install blahblah it aborts with error: externally-managed-environment and says to do brew such and such.

Instead, you've got to use pip3 and not python3 -m pip.

So pip3 install -U --force-reinstall --no-deps https://github.com/mikf/gallery-dl/archive/master.tar.gz and now I've got the nightly version.

@MikeRich88

MikeRich88 commented Nov 30, 2024

Was able to download a friend's photos successfully, although they only have 6 photos ;)

EDIT: Got another friend, 329 photos. Don't seem to be banned or anything. Is there like a 1 second delay in the code already, or is FB just slow?

@zWolfrost
Contributor Author

@MikeRich88 I'm glad to hear that it works for you. To answer your question, the extractor has to make a request to a webpage before every image (plus one at the beginning) to get the necessary data, and those requests are really expensive. Just one request is ~1.5MB, so unless Facebook optimizes its pages, the delay will always be there.

Successfully merging this pull request may close these issues.

[Request] site support: facebook.com