Without Enertion extracting the data, there is little hope.
When I was backing up pages and pages of safety-related information about axle stress that got deleted from this forum in the Hummie Kickstarter thread, I used good old screenshots.
This. Actually we should post a tutorial on how to do that and pin it so everybody can help. Everything gets saved as a complete HTML page, and then we can build a start page with links to the saved content.
edit: I'm building an animated tutorial rn, just in case someone else is also thinking about it
When I get back to my Windows computer I'll see if I can find a way to save it, but no promises.
- Scroll to the top
- Go to the address bar, scroll to the right (or just jump to the end ;))
- Type "?print=yes" and press enter
- Wait
- Dismiss the print window
- Choose "Save page as"
- Choose a location; in case it isn't set, change the format to "Web page, complete" in the lower right
- Save (or script it, see the sketch below)
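If clicking through that for every thread gets tedious, here is a rough Python sketch of the same idea (assuming the `requests` package is installed; the thread URL is a made-up placeholder). It only grabs the raw HTML, not the images and CSS that "Web page, complete" pulls in:

```python
# Minimal sketch: fetch the print view of a thread and save the HTML.
# THREAD_URL is a placeholder -- put the real thread address there.
import requests

THREAD_URL = "https://forum.example.com/t/some-thread/12345"  # hypothetical

resp = requests.get(THREAD_URL + "?print=yes", timeout=60)
resp.raise_for_status()

with open("thread_backup.html", "w", encoding="utf-8") as f:
    f.write(resp.text)
```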
Does it save subpages?
Does anyone have experience with crawlers? I can volunteer large amounts of archive space.
onloop is drinking whiskey and laughing at everyone lol
Yes, everything is going perfectly in his world.
No, only the thread, it doesn't follow links. Or what do you mean by subpages?
Thanks for this
Saved a few of my threads, but my phone can't handle opening the big one in its entirety, hope the site stays up until I can get home
Well, shit, even printing has a limit
If you really want to get a backup, use this.
Keep in mind that the site is massive and Enertion pays primarily for traffic; using this will drive up the traffic usage.
So… everyone needs to use this at the same time?
Are you sure it works for Discourse? It's a JS-based forum, it dynamically loads content. I just tried HTTrack and the stuff it already got doesn't look promising.
Btw, my tutorial has a flaw: on huge threads there's a "next page" link at the bottom, you have to follow that too and save it again under a slightly different name (e.g. "_1").
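If you want to automate that too, here is a rough sketch that keeps following a "next page" link and saves each page under a numbered name. The link-matching regex is only a guess at the markup and the forum URL is a placeholder, so adjust both:

```python
# Sketch: save the print view page by page, following the "next page" link.
# The regex assumes the link text literally contains "next page" -- verify.
import re
import requests
from urllib.parse import urljoin

THREAD_URL = "https://forum.example.com/t/some-thread/12345"  # hypothetical

url = THREAD_URL + "?print=yes"
page = 0
while url:
    html = requests.get(url, timeout=60).text
    with open(f"thread_backup_{page}.html", "w", encoding="utf-8") as f:
        f.write(html)
    # Find an <a> tag whose visible text mentions "next page" (assumed wording).
    match = re.search(r'<a[^>]+href="([^"]+)"[^>]*>[^<]*next page', html, re.IGNORECASE)
    url = urljoin(url, match.group(1)) if match else None
    page += 1
```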
It works, I've used it both in the past and currently on Discourse installations.
LOL, prolly not
Did you use any special settings? I'm using HTTraQT, a Qt frontend to the tool.
Yeah don't do that, you will DDoS the site lol
I used to have good results with Kali and wget commands; you have to take it slow or use proxies or the site will ban your IP.
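The same "take it slow" idea in Python, for what it's worth; the throttling is the point, everything else (URLs, the 10-second delay) is a made-up placeholder:

```python
# Sketch of a polite fetch loop: pause between requests so the server isn't
# hammered and your IP doesn't get banned. URLs and delay are placeholders.
import time
import requests

urls_to_fetch = [
    "https://forum.example.com/t/thread-one/111?print=yes",
    "https://forum.example.com/t/thread-two/222?print=yes",
]

for i, url in enumerate(urls_to_fetch):
    html = requests.get(url, timeout=60).text
    with open(f"backup_{i}.html", "w", encoding="utf-8") as f:
        f.write(html)
    time.sleep(10)  # take it slow: wait a while between requests
```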
You can also use something like Burp Suite Free Edition to crawl the site and scrape the URLs.
If the site goes down, go here and you can download a lot of crawled webpages; there is software that will scrape that site and rebuild it as best it can.
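Assuming "here" means the Wayback Machine (my guess, not confirmed), its CDX API can list the snapshots it holds for a domain. Rough sketch, with the endpoint and field order written from memory, so double-check before relying on it:

```python
# Sketch: list Wayback Machine captures for a domain via the CDX API.
# Endpoint, parameters and field order are from memory -- verify them.
import requests

resp = requests.get(
    "https://web.archive.org/cdx/search/cdx",
    params={"url": "forum.example.com/*", "output": "json", "limit": 50},
    timeout=60,
)
for row in resp.json()[1:]:  # first row is the header row in JSON output
    timestamp, original_url = row[1], row[2]
    # Each capture should be viewable at /web/<timestamp>/<original_url>.
    print(f"https://web.archive.org/web/{timestamp}/{original_url}")
```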
I didn't, using the Win version if that matters.