# Fetch and Parse HTML Web Page Content From Bash. Wow.

*26 April 2008*
Okay, this is another one of those Linux newbie posts where I tried to figure out how to do something that’s probably really obvious to all you seasoned hackers out there.

Anyway here I go clogging up the internet with a post that somebody, somewhere will hopefully find useful.

Are you that person? Well… have you ever used the shell command curl to fetch a web page? It’s cool, isn’t it? But you do end up with a splurge of ugly HTML tags in your terminal shell:
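
Something like this, say (example.com here is just my stand-in for whatever page you fancy):

    # fetch a page and spew the raw response straight into the terminal
    curl http://example.com/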

Eugh!

So… how about we parse that HTML into something human-readable?


Enter my new friend, w3m, the command-line web browser!

If you’re using OS X, you can install w3m using darwinports thusly:

sudo port install w3m

Linux hackers, I’m going to assume you can figure this out for yourselves.
So, with a brand-new blade in our Swiss Army knife, let’s pipe curl’s output into w3m’s standard input and see what happens:
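
Something along these lines (example.com again standing in for a real page):

    # pipe curl's output into w3m and see what it makes of it
    curl http://example.com/ | w3m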

Hmm… two problems here: because I’ve grabbed its output and piped it off to w3m, curl has started blethering on about how long the transfer took. I can fix that with a swift but ruthless flick of the -s switch to silence it. How about all that raw HTML though? I thought this w3m thing was supposed to parse my HTML, not just regurgitate it.

It turns out that w3m assumes its input is of MIME-type text/plain, unless told otherwise. Let’s set the record straight:
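
Roughly like so, with -s hushing curl and -T telling w3m what it’s being fed (example.com still just a stand-in):

    # -s silences curl's progress meter; -T text/html tells w3m to render the input as HTML
    curl -s http://example.com/ | w3m -T text/html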

Aw yeah. Now we’re talking. Old-skool green-screen meets nu-school interweb. It’s like being back on the BBS network of yore.

What’s the point of all this? Well, that’s up to you. I have a couple of ideas, but you’re going to have to start coming up with your own you know. Why are you reading this anyway? Haven’t you got anything better to do?

# Saving Your WordPress Blog to CD

*11 April 2008*

So the wife has been writing her mandatory university course diary as a WordPress blog, but now she needs to hand it in.

> Can you put it on a CD for me?

she asks.

Unix to the rescue!

Following this excellent article, I had the site saved down to disk in a jiffy, with all links modified to work offline and all images and CSS files copied down.

For your reference, here’s the command I used.

    wget --mirror -w 2 -p --html-extension --convert-links -P ~/path/to/save/locally -H -Dwordpress.com http://yourblog.wordpress.com

Quoting Jim’s article for the meaning of the command line options:
> --mirror: specifies to mirror the site. Wget will recursively follow all links on the site and download all necessary files. It will also only get files that have changed since the last mirror, which is handy in that it saves download time.
>
> -w: tells wget to “wait” or pause between requests, in this case for 2 seconds. This is not necessary, but is the considerate thing to do. It reduces the frequency of requests to the server, thus keeping the load down. If you are in a hurry to get the mirror done, you may eliminate this option.
>
> -p: causes wget to get all required elements for the page to load correctly. Apparently, the mirror option does not always guarantee that all images and peripheral files will be downloaded, so I add this for good measure.
>
> --html-extension: All files with a non-html extension will be converted to have an html extension. This will convert any cgi or asp generated files to html extensions for consistency.
>
> --convert-links: all links are converted so they will work when you browse locally. Otherwise, relative (or absolute) links would not necessarily load the right pages, and style sheets could break as well.
>
> -P (prefix folder): the resulting tree will be placed in this folder. This is handy for keeping different copies of the same site, or keeping a “browsable” copy separate from a mirrored copy.

I’ve also added a couple of options of my own at the end of Jim’s version:

-H -Dwordpress.com

These options tell wget to follow links onto other hosts, but only to fetch files within the .wordpress.com domain; without them the stylesheets and images for the blog, which are served from other subdomains of wordpress.com, would not be downloaded.
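
For reference, here’s a sketch of the same command written out with the long-form names of those two options (in GNU wget, -H is --span-hosts and -D is --domains), which I find easier to read back later:

    # --span-hosts lets wget follow links onto other hosts;
    # --domains=wordpress.com then limits those hosts to *.wordpress.com
    wget --mirror -w 2 -p --html-extension --convert-links \
         -P ~/path/to/save/locally \
         --span-hosts --domains=wordpress.com \
         http://yourblog.wordpress.com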
