So the wife has been writing her mandatory university course diary as a wordpress blog, but now she needs to hand it in.
> Can you put it on a CD for me?
Unix to the rescue!
Following this excellent article I had the site saved down to disk in a jiffy, with all links modified to work offline, all images and CSS files copied down.
For your reference, here’s the command I used.
wget --mirror -w 2 -p --html-extension --convert-links -H -Dwordpress.com -P ~/path/to/save/locally http://yourblog.wordpress.com
Quoting Jim’s article for the meaning of the command line options:
> --mirror: specifies to mirror the site. Wget will recursively follow all links on the site and download all necessary files. It will also only get files that have changed since the last mirror, which is handy in that it saves download time.
> -w: tells wget to “wait” or pause between requests, in this case for 2 seconds. This is not necessary, but is the considerate thing to do. It reduces the frequency of requests to the server, thus keeping the load down. If you are in a hurry to get the mirror done, you may eliminate this option.
> -p: causes wget to get all required elements for the page to load correctly. Apparently, the mirror option does not always guarantee that all images and peripheral files will be downloaded, so I add this for good measure.
> --html-extension: All files with a non-html extension will be converted to have an html extension. This will convert any cgi or asp generated files to html extensions for consistency.
> --convert-links: all links are converted so they will work when you browse locally. Otherwise, relative (or absolute) links would not necessarily load the right pages, and style sheets could break as well.
> -P (prefix folder): the resulting tree will be placed in this folder. This is handy for keeping different copies of the same site, or keeping a “browsable” copy separate from a mirrored copy.
I’ve also added a couple of options of my own to Jim’s version:
> -H -Dwordpress.com: these tell wget to span hosts, but only fetch files within the wordpress.com domain – otherwise the stylesheets and images for the blog, which are served from different subdomains of wordpress.com, would not be downloaded.
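If you expect to re-run the mirror, it can be handy to wrap the whole invocation in a small script. This is just a sketch: the blog URL and destination path below are placeholders to substitute with your own, and note that -P must be immediately followed by the destination directory.

```shell
#!/bin/sh
# Sketch: wrap the mirror command in a reusable script.
# BLOG_URL and DEST are placeholders -- substitute your own values.
BLOG_URL="http://yourblog.wordpress.com"
DEST="$HOME/path/to/save/locally"

# -P takes the destination directory as its argument, so it must
# come directly before that path on the command line.
CMD="wget --mirror -w 2 -p --html-extension --convert-links -H -Dwordpress.com -P $DEST $BLOG_URL"
echo "$CMD"
# Uncomment the next line to actually run the mirror:
# $CMD
```

Printing the command before running it makes it easy to double-check the option order.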
Thank you for not only doing all the footwork, but posting it in one easy-to-follow place! You have saved me precious time scouring the web, so I must thank you!
Hi, the correct version is:
wget --mirror -w 2 -p --html-extension --convert-links -H -Dwordpress.com -P ~/path/to/save/locally http://yourblog.wordpress.com
The “-H -Dwordpress.com” must be before -P, ’cause otherwise the directory prefix is “-H”
I am desperately trying to download my blog to submit on a CD. It is WordPress, so the links are all absolute as you noted. I installed wget on my Mac (10.4.11, Intel), and the links in the downloaded .html files are not relative. Any suggestions?
If you type ‘man wget’ you should get the manual for the command, which will tell you what to do. Have you tried ‘--convert-links’, as helios commented above?
I get this error message when I try to access the article on site you mention above
You don’t have permission to access /articles/wget.html on this server.
Is there another way of accessing the article?
Sorry Rick, that guy seems to have taken his website down. Try googling I guess 🙂
Hi! I tried the suggested method, but I am not able to retrieve the images and CSS: only one file is downloaded, named index.html, and nothing else!
What can I do? Am I doing something wrong? Thank you!
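Hard to say without more detail, but one common cause of getting only index.html is that wget honours the site’s robots.txt and stops there. As a hedged sketch (the robots override below is a guess at the cause, not part of the original recipe, and the URL and path are placeholders), you could try:

```shell
#!/bin/sh
# Troubleshooting sketch: add -e robots=off so wget ignores robots.txt.
# This addition is a guess at the cause, not part of the original recipe.
BLOG_URL="http://yourblog.wordpress.com"
DEST="$HOME/path/to/save/locally"

CMD="wget --mirror -w 2 -p --html-extension --convert-links -H -Dwordpress.com -e robots=off -P $DEST $BLOG_URL"
echo "$CMD"
# $CMD   # uncomment to actually run
```

If that doesn’t help, running wget without -q and reading its per-file output should show where the recursion stops.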