12/10/2009

DD-WRT: upgrade your router with an open-source router OS

A Lifehacker hack that turns a cheap router into a high-end one:
http://lifehacker.com/software/router/hack-attack-turn-your-60-router-into-a-600-router-178132.php

Quote:

Hack Attack: Turn your $60 router into a $600 router

by Adam Pash

Of all the great DIY projects at this year's Maker Faire, the one project that really caught my eye involved converting a regular old $60 router into a powerful, highly configurable $600 router. The router has an interesting history, but all you really need to know is that the special sauce lies in embedding Linux in your router. I found this project especially attractive because: 1) It's easy, and 2) it's totally free.

So when I got the chance, I dove into converting my own router. After a relatively simple firmware upgrade, you can boost your wireless signal, prioritize what programs get your precious bandwidth, and do lots of other simple or potentially much more complicated things to improve your computing experience. Today I'm going to walk you through upgrading your router's firmware to the powerful open source DD-WRT firmware.

What you'll need:

  1. One of the supported routers. I used a Linksys WRT54GL Wireless router that I picked up from Newegg, and the instructions that follow detail the upgrade process specifically for that router and its close siblings. If you're upgrading one of the other supported routers, you might want to look into instructions specific to your router. These instructions may generally work for other supported routers, but I'm not making any promises.
  2. The generic DD-WRT v23 SP1 mini firmware version located here.*
  3. The generic DD-WRT v23 SP1 standard firmware version located here.*

*You'll be upgrading the firmware twice, first using the mini firmware, then using the standard.

Upgrading your router to the DD-WRT firmware

Check out this gallery for the detailed step-by-step upgrade with screenshots. When you're finished, come back here for some of my favorite tweaks.

Update, October '07: Reader Josh Harris writes in:

All the new WRT54G routers being sold now are v8, and the previous DD-WRT software didn't work on them. However, recent versions added support for the new v8 router, but the process is a little more involved.

I got this to work on the WRT54G v8 (it should work on v7 as well; just replace the files with the corresponding v7 versions):

First of all, use Internet Explorer; Firefox didn't work at all for me here, even after the install. Second, go to this page. Read the text file carefully and follow its instructions. Two additions to those instructions:

1. Make sure you go to a command prompt and type ipconfig /all. Record the default gateway, the subnet mask, and the two DNS addresses. When you manually set the IP address on your desktop/laptop to 192.168.1.100 as per the instructions, you will need to set these four values as well.
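
If you prefer the command line, here is a minimal sketch of the same thing on Windows XP. The adapter name "Local Area Connection" and the 192.168.1.1 gateway/DNS values are assumptions; substitute whatever ipconfig /all reported:

ipconfig /all
netsh interface ip set address "Local Area Connection" static 192.168.1.100 255.255.255.0 192.168.1.1 1
netsh interface ip set dns "Local Area Connection" static 192.168.1.1
netsh interface ip add dns "Local Area Connection" 192.168.1.2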

2. Don't forget that when you run tftp you need to be in the folder that contains the downloaded dd-wrt.v24_micro_wrt54gv8.bin file (for example, if it is in C:\Downloads, type cd C:\Downloads).
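
For reference, the transfer step itself typically looks like the sketch below on Windows. The 192.168.1.1 router address is an assumption; use whatever address the text file's instructions give:

cd C:\Downloads
tftp -i 192.168.1.1 PUT dd-wrt.v24_micro_wrt54gv8.bin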

Lastly, don't forget that you need a wired connection to the router, and that you should download both vxworkskillerGv8.bin and dd-wrt.v24_micro_wrt54gv8.bin before you start. Following this procedure will install the micro version on your router.

After this, switch your laptop/desktop back to receiving an IP address via DHCP rather than the manual configuration you set as per the instructions. You will be able to access the DD-WRT micro install at 192.168.1.1 with the login username root and the password admin. From here, you still need to install the DD-WRT standard.

Unfortunately, you cannot go any farther than this with WRT54G v7 and v8 because Linksys downgraded the physical memory in these recent models. However, micro is still an improvement over the original Linksys firmware.

Boost your wireless signal

The first thing I did after I finished the firmware upgrade was give my wireless signal a much needed boost ("needed" in the sense that any signal boosting that can be done needs to be done, right?). Doing so is trivial.

Go to the Wireless tab, then to Advanced Settings. Find the entry labeled Xmit Power, which is set by default at a paltry 28mW, and can be set up to 251mW. To be honest, I don't know much about the science of the whole process, but I do know that 251 is WAY bigger than 28. However, you probably don't want to pump it up to 251mW right away.

The DD-WRT manual suggests that a "safe increase of up to 70 would be suitable for most users." Anything much above that and you'd be flirting with overheating your router and shortening its life (though I've heard that many people have pushed it up to 100 or above). So go ahead and change your Xmit Power to 70 and click the Save Settings button at the bottom of the page.

I can't measure for sure how the signal boost has improved things for me since I've just moved into this apartment, but I can say that the signal is full bars pretty much anywhere I go. How's that for scientific?

Throttling your bandwidth by program

While most routers treat one request for bandwidth the same as any other, your new $600 router is a step above. By setting up QoS (Quality of Service) rules, you can give priority to your interactive traffic (like VoIP, web browsing, or gaming) while throttling traffic that doesn't require a steady rate of bandwidth to function (like P2P programs).

Doing so will ensure that even if your network gets clogged with lots of file sharing, you'll still have enough bandwidth left over to make all of your free SkypeOut phone calls. If you've got roommates who tend to sponge up a lot of bandwidth, you can even prioritize by IP address.

What to do if you brick your router

If, god forbid, while flashing your firmware you end up "bricking" your router, don't worry - all is not lost. The DD-WRT wiki (a great resource of all things DD-WRT) can help you recover from a bad flash.

Of course, your router will handle securing your network, port forwarding, and all the other things your regular old router does.

Obviously I've just scratched the surface here, so if you decide to try this out, there's a lot of potential for other things you can do. Any readers tricked out a router with DD-WRT or one of the other open source distros? Tell us what tweaks have worked for you in the comments or at tips at lifehacker.com.

XBMC on Media Center!

XBMC build instructions from Lifehacker:
http://lifehacker.com/5391308/

Quote:

Build a Silent, Standalone XBMC Media Center On the Cheap

You won't find a better media center than the open-source XBMC, but most people don't have the space or desire to plug a noisy PC into their TV. Instead, I converted a cheap nettop into a standalone XBMC set-top box. Here's how.

In the spirit of our Winter Upgrades theme this week, this guide details how to turn a cheapo nettop (think netbook for the desktop) into a killer set-top box running XBMC. It handles virtually any video file I throw at it with ease (including streaming Blu-ray rips from my desktop), it looks tiny next to my Xbox 360, it's low energy, and it's whisper quiet.

Huge props to this guide on the XBMC forums, which served as the starting point for much of what I did below.

What You'll Need

  • Acer AspireRevo: This $200 nettop ships with 1GB of RAM, an Intel Atom 230 processor, 160GB hard drive, Windows XP (which we won't use anyway), and an integrated graphics chip that handles HD video and can output it to HDMI. It also comes with a small wired keyboard and mouse, but once you're done here, you shouldn't need either of them. Oh, and it's tiny. (Other, more powerful nettops will work [like this one's beefier, $330 older sibling], but this is the cheapest one I could find with the NVIDIA ION graphics powerful enough to handle the HD playback.)
  • XBMC Live: This is a Live CD version of XBMC that boots directly into XBMC and has a tiny footprint. Basically all you're running is XBMC, so your media center stays light and snappy. You can find the download specifically set up for these NVIDIA ION machines on this page, you can grab the direct download here, or download via BitTorrent here.
  • A thumb drive: It doesn't have to be huge, but it'll need at least 1500MB of capacity, according to the installer. You should also format it to FAT32.
  • An IR receiver/Windows Media Center remote: This isn't strictly necessary, but if you want to control your shiny new XBMC via remote control, you'll need some sort of supported remote with a USB receiver. I bought this $20 remote because it was the cheapest I could find. (Incidentally, it also works like a charm with XBMC as soon as you plug it in.)

Getting XBMC Live up and running on your nettop is a breeze if you follow a few simple steps, so let's get started.

Install XBMC Live on Your Thumb Drive

XBMC Live allows you to try XBMC on any computer from a bootable CD or thumb drive, then optionally install the lightweight, XBMC-focused Linux distro directly to your device if you like. Since our nettop doesn't have a DVD drive, we'll need to first install XBMC to our thumb drive.

(There are ways around this. If you had a USB optical drive, you could probably burn XBMC Live to a disc and go from there. The thumb drive method isn't much more difficult, though.)

Here's how it works:

1. Download the XBMC Live installer with the updated NVIDIA drivers included on this page (direct link, torrent link). Update: Huge thanks to Mike and Aaron for the file hosting and torrent creating. It's a 341MB file, so it may take a while.

2. Burn XBMC Live to a CD
Once the download completes, unzip the xbmc.zip file. What you're left with is an xbmc.iso file—the disc image of the XBMC Live installer. Now you need to burn this ISO to a CD. Install our favorite tool for the job, ImgBurn, then right-click the xbmc.iso file and select Burn using ImgBurn. (If you're running Windows 7, you can use its built-in ISO burner, too, by selecting Burn disc image.)

3. Install XBMC Live to Your Thumb Drive
Now that you've burned XBMC to a CD, you're ready to install it to your thumb drive. To do so, plug in your thumb drive, put the XBMC Live CD in your DVD drive, and reboot your computer. If it's not already your default setting, go into your system BIOS (for most computers hitting Delete at the first boot screen will launch your BIOS) and set your optical drive as the primary boot device.

(All this means is that when your computer boots, it'll first check to see if there's any bootable media in your optical drive. If not, it'll continue booting to your secondary device—generally your hard drive. If your optical drive does contain bootable media—like your XBMC Live CD, for example—it'll boot it up.)

When XBMC Live loads, select "Install XBMCLive to disk (USB or HDD)", then accept the first prompt (by pressing any key). Next you'll end up at the "Choose disk to use" prompt, where you'll tell the installer that you want to install to your USB stick. Be careful here not to choose your hard drive, because it would be happy to overwrite your operating system if you told it to. Remember, your thumb drive is the Removable disk. After you pick the disk you want to use, confirm that you want to proceed and let the installer do its magic. (It'll only take a few minutes.)

Eventually the installer will ask you if you want to create a permanent system storage file, which presumably you'd want if you're not sure whether or not you want to install XBMC Live to your Acer's hard drive. I went ahead and created the system storage (even though we'll install XBMC Live directly to the hard drive in the next step.) Once the installation finishes, you're ready to proceed to the next step.

Set Your System BIOS

You'll need to make a couple of tweaks to your system BIOS to get it working smoothly with XBMC Live. So plug in your thumb drive, boot up your Acer AspireRevo, and hit Delete at the first boot screen to edit your BIOS. Look for the "Boot to RevoBoot" entry in the Advanced BIOS features menu and disable it. While you're there, set your XBMC Live thumb drive as the primary boot device. (You can set the primary boot device back to your hard drive later, after you've installed XBMC Live to your drive.)

Then go to the Advanced Chipset Features menu and set the iGPU Frame Buffer Detect to Manual and set the iGPU Frame Buffer Size to 256MB. (This is detailed here; the actual guide says 512, but that requires that you install more RAM—something I may do in the future, and will detail with a guide if I do. The 512 buffer size will help you stream the larger HD videos without hiccups.)

Now that your BIOS is set, you're ready to try out XBMC Live on your Acer AspireRevo.

Boot Up/Install XBMC Live to Your Hard Drive

At this point, you've got two choices. You can either restart your Acer AspireRevo and boot into XBMC Live to play around a little before you install it to your disk, or, if you're sure you're ready to install it for reals, go ahead and run through the exact same installation as you did above, only this time installing to your nettop's hard drive. When you install to the hard drive, you'll also set a system password. This'll come in handy later.

The Final Tweaks

Okay, so far so good. XBMC should boot up directly from your hard drive now, and if you've plugged in your Windows Media Center remote, it should also be working without a hitch. You've just got to make a couple of adjustments to make it shine.

Now, what makes your little nettop work so well is that its onboard graphics processor can handle all the HD business without eating up your regular processor power, so you'll want to enable this in the XBMC settings. So head to Settings > Video > Play, find the Set Render to section, and set it to VDPAU.

Now, depending on how you're planning on hooking up your XBMC Live box to your television, you've got a few more tweaks you'll want to make. Namely this:

If you want to use HDMI for your audio out, head to Settings > System > Audio hardware, then set the audio output to Digital. Set your Audio output device to hdmi, and set the Passthrough output device to hdmi. Last, enable Downmix multichannel audio to stereo.

If you are using HDMI as your audio out (I am, and it's pretty nice), you've got to make one final tweak if you want the audio output to work with menu sounds. (It'll work fine with video without making this tweak, but the click-click sounds that play when you move around the XBMC menu are nice to have.) If that applies to you, create a new text file on your regular old computer (name it asoundrc.txt) and paste the following code (again, this awesome tweak comes from this post):

# Make the HDMI output the default ALSA device
pcm.!default {
    type plug          # "plug" converts sample rates/formats automatically
    slave {
        pcm "hdmi"     # route the actual audio to the HDMI device
    }
}

In the next step, I'll show you how to copy that file over to your nettop (a little trick that'll also come in handy for manually installing plug-ins and copying files to your nettop).

SFTP to Your XBMC Box

If you want to transfer files to your XBMC Live box from another computer, you'll need an SFTP-capable client (I like FileZilla), and you'll log into your nettop with the password you set when you were installing XBMC Live. To do so, create a new connection in FileZilla that looks something like the screenshot below (the default user is xbmc).

Once you're connected, make sure you're in the /home/xbmc/ directory, then copy over the asoundrc.txt file we made above. (The one you want to use if you're running your audio through the HDMI output.) Once it's copied over, rename the file to .asoundrc, restart XBMC, and the click-click menu sounds should be working along with regular old A/V playback.
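
If you'd rather skip the GUI, here is a minimal sketch using the OpenSSH command-line sftp client. The address 192.168.1.50 is made up; substitute your nettop's actual IP:

sftp xbmc@192.168.1.50
put asoundrc.txt /home/xbmc/asoundrc.txt
rename /home/xbmc/asoundrc.txt /home/xbmc/.asoundrc
quit

(The put and rename lines are typed at the sftp> prompt; they accomplish the same copy-then-rename described above.)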

The same SFTPing method here will be useful if you ever want to manually install any plug-ins or skins down the road, or just copy over media directly to your nettop's hard drive. (Though we'd recommend streaming—see below.)

Other Options

As I said above, you can buy more expensive, meatier machines, but for my money this Acer nettop has worked perfectly. Apart from upgrading to better equipment, you can also add up to 2GB more RAM if you're up for the job (RAM's so cheap these days, anyway). Like I said, though, so far I haven't seen the need for it.

I also quickly switched the skin to the MediaStream skin, which is the one you see in the photo at the top of the page. For a look at some other great skins you may want to apply to your XBMC box, check out these five beautiful skins—or just head to XBMC's main skins page.

Now that you've got it all set up, you've probably also realized that 160GB isn't all that much space for your media. You'd be right, of course. You've got two pretty good options. First, the nettop should have something like four free USB ports still, so you can easily plug in a big old drive that way. Assuming, however, that you can run an Ethernet wire over to your nettop, your best option is just to connect it to a shared folder on your home network. XBMC works like a charm with Samba shares (Windows shared folders use this).

Whichever method you use, you just need to add your extra hard drive space as a source in XBMC. You can do so through any of the individual menu items (videos, for example), or you can add a default Samba username and password in the settings so it can connect automatically without asking for a password each time you add a new watch folder on that machine.
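
For reference, a Samba source in XBMC is just a URL of the form below; the machine and share names here are hypothetical:

smb://DESKTOP/Videos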

At this point I could go into more detail on how to use and get the most out of XBMC (it can be a little hard to get your head around at first, though once you do, it's not actually confusing). We've covered souping up your XBMC—and building your classic Xbox XBMC machine—and both offer some help in those directions. But stick around; tomorrow we'll follow up with an updated guide to some of our favorite XBMC tweaks.

How to deal with trouble with your boss

If you have trouble with your boss, consider this article:
http://www.lifehack.org/articles/management/what-to-do-if-you-dont-get-along-with-your-boss.html

Quote:

What to Do if You Don’t Get Along with Your Boss

What should you do if you really cannot get on with your boss at work? Maybe there has been a breakdown in trust, in communication or in respect. In any event it is ruining your time at work and making you frustrated and unhappy. Let’s call your manager “John” and see how we can approach the situation. (The advice here works equally well whether your boss is a man or a woman).

1. How do other people find him? Does everyone have a hard time with John or is it just you? Check out how other people get on with him by asking subtle questions – do not rant about how awful he is and see if others agree. If everyone has a problem with him then you have some common ground on which to work. If only you have difficulties with him then you need to examine yourself and your relationship with him.

2. Ask yourself why. List all the reasons why you think things are not working between you. There are probably some big assumptions on your list so you will need to validate them carefully.

3. Have a heart to heart meeting. Schedule a time to meet John when he is not under pressure. Tell him that you want to discuss some important issues. At the meeting explain very calmly and rationally that you do not feel the relationship is working well and that you would like to explore why and how to improve it. Do not go into a long list of complaints and sores. Take a factual example if you can and start from there. Let him do most of the talking. Try to see the situation from his point of view and understand exactly what he sees as the issues. See how many of the problems you listed at point 2 above are real.

4. Agree an action plan. If you can agree a plan for outcomes that you both want then it really helps. What is it that he wants you to achieve? If you deliver it will he be happy with your performance? Even if you disagree on all sorts of other things try to agree on what your key job objectives are. Ideally you should agree actions that each of you will take to improve the working relationship.

5. Try to understand his objectives and motivation. Even if John is lazy, dishonest and spiteful you can still find out what he is keen to achieve and work with him towards his goals. If you can find a way to help him with his objectives then maybe you can work around his faults. A good rule at work is to help your boss to succeed – whether you like him or not. Other people will see you do this and it works to your credit – especially if they know that your boss is difficult.

6. Go over his head. This is a risky option but sometimes it is necessary – especially if most other people share the same problems with John. Have a quiet word with your boss’s boss and say that you feel that the department is not achieving all that it could. Make some broad suggestions about how things could be improved without making direct accusations against John. Let the senior manager read between the lines; he or she probably knows already.

7. Move sideways in the organization. If you cannot move up then move across for a while. Get some experience in another department. Eventually John will move on, be fired or quit. If you are seen as a positive contributor then you may get your chance to do John’s job better than he did.

8. Quit. Life is too short to spend it in a job that makes you miserable. If you have tried all of the routes above and are still blocked and frustrated then find a job elsewhere. There are plenty of good bosses who want enthusiastic and diligent people to work for them.

Sooner or later most of us will get a difficult boss to deal with. Do not become sullen or aggressive. The trick is to figure out a way to get on with the boss in a manner that helps both of you. It can nearly always be done.

11/29/2009

Time Management Hack: Fixed Schedule

I love Ramit's blog and what he finds. Here's an article he found, sourced from Study Hacks: Time management: How an MIT postdoc writes 3 books, a PhD defense, and 6+ peer-reviewed papers — and finishes by 5:30pm

Quote below:

I’m always on the lookout for “hidden gems,” or people who are doing remarkable work that the whole world hasn’t caught on to, yet.

Today, I asked my friend Cal Newport to illustrate how he completely dominates as a post-doc at MIT, author of multiple books, and popular blogger. How does he do it all?

Cal writes one of the best blogs on the Internet: Study Hacks. His guest post shows how you can take I Will Teach You To Be Rich principles — plus many others — and integrate them into a way to use your time effectively.

Below, you’ll learn:

  • How to use fixed-schedule productivity — similar to the Think, Want, Do Technique — to consciously choose what you want to work on and ignore worthless busywork
  • When to say no — and how to do it
  • How a $60,000-a-speech professional manages his time
  • Case study: How to use email for maximum time productivity

Read on.

* * *

From Cal:

I recently conducted a simple experiment: I recorded the timestamps of the last 50 e-mails in my sent messages folder. These timestamps covered one week of my e-mail behavior, starting on Thursday, October 22nd and ending Thursday, October 29th.

My interest was to measure when during the day I spent time on e-mail. Here’s what I found:

(Chart: the timestamps of the 50 sent e-mails, grouped by time of day.)

Notice that over this week-long period, I didn’t send any e-mail after 7:00 pm, and only one e-mail after 6:00 pm. There’s a good explanation for this discipline: I end all work around 5:30 every day. No Internet. No computer. No to-do lists. Once I shut down my day, it’s time to relax.

I must emphasize that I’m not some laid-back lifestyle entrepreneur who monitors an automated business from a hammock in Aruba. I have a normal job (I’m a postdoc) and a lot on my plate.

This past summer, for example, I completed my PhD in computer science at MIT. Simultaneous with writing my dissertation I finished the manuscript for my third book, which was handed in a month after my PhD defense and will be published by Random House in the summer of 2010. During this past year, I also managed to maintain my blog, Study Hacks, which enjoys over 50,000 unique visitors a month, and publish over a half-dozen peer-reviewed academic papers.

Put another way: I’m no slacker. But with only a few exceptions, all of this work took place between 8:30 and 5:30, only on weekdays. (My exercise, which I do every day, is also included in this block, as is an hour of dog walking. I really like my post-5:30 free time to be completely free.)

I call this approach fixed-schedule productivity, and it’s something I’ve been following and preaching since early 2008. The idea is simple:

  • Fix your ideal schedule, then work backwards to make everything fit — ruthlessly culling obligations, turning people down, becoming hard to reach, and shedding marginally useful tasks along the way.

The beneficial effects of this strategy on your sense of control, stress levels, and amount of important work accomplished are profound.

The notion is not new. Tim Ferriss famously recommended strict time constraints in The 4-Hour Work Week. He argued that much of the work we do is of questionable importance and conducted at low efficiency. (He made a popular, if somewhat dubious, appeal to Parkinson’s Law to support the point that more time does not necessarily lead to more results.) If we instead identify only the most important tasks, he said, and tackle them under severe constraints, we’d be surprised by how little time we actually require.

In this article, I want to tell the stories of real people who successfully implemented this strategy – radically improving the quality of their lives without scuttling their professional success.

Jim Collins’ Whiteboard

(Photo by Kevin Moloney for The New York Times)

Jim Collins has sold over seven million copies of his canonical business guides, Good to Great and Built to Last. He attributes the success of these books to his research discipline. As he revealed in a New York Times profile from last May, he leads teams of up to a dozen undergraduates in the process of information gathering. His books require, on average, a half-decade of time and a half-million dollars of expenses to get from their initial premise to the polished ideas. When he enters his “monk” mode to convert this research into a manuscript, he produces, at best, a page a day.

In other words, Collins is a hardworking guy. You would expect, therefore, that like many hard-charging business-world types he would be a blackberry-by-the-bedside workaholic.

But he’s not.

Scrawled on a whiteboard in the conference room of Collins’ Boulder, Colorado office is a simple formula:

Creative 53%
Teaching 28%
Other 19%

Collins decided years ago that a “big goal” in his life was to spend half of his working time on creative work — thinking, researching, and writing — a third of his time on teaching, and then cram everything else into the last 20%. The numbers on the whiteboard are a snapshot of his current distribution. (He tracks his time with a stop watch and monitors his progress in a spreadsheet.)

Collins is a pristine example of fixed-schedule productivity in action. An author with his level of success could easily fall into an overwork trap: long nights spent updating twitter, signing partnerships, building elaborate web sites and launching product lines, speaking at every possible venue. But he avoids this fate.

Even though Collins demands over $60,000 per speech, for example, he gives fewer than 18 per year, and a third of these are donated for free to non-profit groups. He doesn’t do book tours. His web site is mediocre. He keeps his living expenses in check so that he’s not dependent on drumming up income (he and his wife have lived in the same California bungalow for the past 14 years), and he keeps only a small staff, preferring to bring on volunteers as needed.

“Mr. Collins…is quite practiced at saying ‘no,’” is how The Times described him. (He once wrote an article for USA Today titled: “Best New Year’s Resolution? A ‘Stop-Doing’ List.”)

His fixed-schedule approach to life comes from his simple conviction “to produce a lasting and distinctive body of work,” and his “willingness…to focus on what not to do as much as what to do” has made that possible.

He’s not alone in reaping the benefits of the fixed-schedule approach…

Elizabeth’s Conversion

When Elizabeth Grace Saunders started her first business, a professional copywriting service, her schedule was “hazardous.”

“I would answer e-mails after going out with friends,” she told me, “and stay up until 2 a.m. finishing projects.”

At some point, she snapped. “I’m not a secretary,” she declared. “I’m not required to jump to respond to everything that crosses my path.”

Saunders adopted a 40-hour-a-week schedule. This new structure had two immediate impacts. First, she found herself focusing only on the most important tasks. With only a few hours to spare on business development, for example, she couldn’t justify wasting time on the small, ineffectual website tweaks and exploratory e-mails that used to keep her up late into the night. Instead she focused on the core activities that produced results, such as sales calls or the development of new products. The focus generated by this constraint ended up producing more results than her previous schedule, which was more expansive but also more scattered.

The second impact was her discovery that she could teach her clients how to treat her.

“I’ll answer your e-mail within 24 hours (not 24 minutes), I need notice before starting a project, I will say ‘no’ if my schedule for the near future is already full, and I might schedule meetings up to a month in advance.”

“Choosing how and when I respond to requests has had a dramatic impact,” Saunders notes.

Friends and clients were impressed enough with Saunders’ lifestyle that she eventually left copywriting to become a “time coach” who works with other women in business to achieve similar results. (Her flagship service is called a Schedule Makeover.)

Here’s a typical day in Saunders’ life:

  • She’s up at 6 and by 8:30 she’s at the computer.
  • The first 1 – 2 hours of her work day are spent doing what she calls “routine processing,” which includes checking calendars, clearing e-mail inboxes, and cementing a plan to follow for the rest of the day. As Saunders describes it, this morning routine prevents her from wasting time deciding how to start, and it frees her of the “compulsion” to be checking e-mail throughout the day.
  • She continues with an hour of sales calls. This is often the most dreaded activity for the solo entrepreneur. But by having a regular place in her constrained schedule, she avoids pushing it aside.
  • The rest of the day follows the schedule she fixed in the morning: usually a mix of client assignments and at least one business development activity.
  • By 5:30 she’s done.

Most entrepreneurs work well past 5:30 (and claim that this is absolutely unavoidable), but Saunders’ business is thriving. The reason is clear: her fixed schedule forces her to do the work that produces results (sales calls, client assignments, major business development activities) and eliminates the hours of pseudowork that many use to fill their day in an effort to feel “busy” (tweaking websites, compulsive e-mail checking, chasing down small business development opportunities).

Saunders is not the only young entrepreneur I’ve met who was surprised to discover that doing less helped the bottom line…

The Baby Factor

Michael Simmons, a good friend of mine, reported a similar story. His company, the Extreme Entrepreneurship Education Corporation, expanded quickly in the years following college graduation. Around the time I was reading The 4-Hour Work Week, I started to discuss with Simmons the possibility of toning down his hours. It was his company, I argued, so why not take advantage of this fact to craft an awesome life?

Among the specific topics we discussed, I remember suggesting that Simmons cut down the time spent on e-mail and social networks.

“This isn’t optional for me,” he explained. “Any of these contacts could turn into an important partner or sale.”

But then Simmons’ daughter, Halle, was born.

Simmons’ work schedule shrank from 10-to-12-hour days to 3-to-5-hour days. He took care of the baby in the morning, then worked in the afternoon while his wife, and company co-founder, took over the childcare responsibilities. Evenings were family together time.

Halle forced Simmons into the type of constrained schedule that he had previously declared impossible. And yet the business didn’t flounder.

“The baby turns ’shoulds’ into ‘musts’,” Simmons explained to me. “In the past I used to put off key decisions, or saying ‘no’, because I didn’t want to deal with the discomfort. Now I have no choice. I have to make the decisions because my time has been slashed in half.”

“Since our daughter was born about a year ago, our business has more than doubled.”

The Fixed-Schedule Effect

Collins, Saunders, and Simmons all share a similar discovery. When they constrained their schedule to the point where non-essential work was eliminated and colleagues and clients had to retrain their expectations, they discovered two surprising results.

First, the essentials — be it making sales calls, or focusing on the core research behind a book — are what really matter, and the non-essentials — be it random e-mail conversations, or managing an overhaul to your blog template — are more disposable than many believe.

Second, by focusing only on the essentials, those essentials receive more attention than they did when the schedule was unbounded. The paradoxical effect, as with Collins’ bestsellers or Saunders’ and Simmons’ fast-growing businesses, is that you achieve more results.

Living the Fixed-Schedule Lifestyle

The steps to adopting fixed-schedule productivity are straightforward:

  1. Choose a work schedule that you think provides the ideal balance of effort and relaxation.
  2. Do whatever it takes to avoid violating this schedule.

This sounds simple. But of course it’s not. Satisfying rule 2 is non-trivial. If you took your current projects, obligations, and work habits, you’d probably fall well short of satisfying your ideal schedule.

Here’s a simple truth that you must confront when considering fixed-schedule productivity: sticking to your ideal schedule will require drastic actions. For example, you may have to:

  • Dramatically cut back on the number of projects you are working on.
  • Ruthlessly cull inefficient habits from your daily schedule.
  • Risk mildly annoying or upsetting some people in exchange for large gains in time freedom.
  • Stop procrastinating.

In the abstract, these are all hard goals to accomplish. But when you’re focused on a specific goal — “I refuse to work past 5:30 on weekdays!” — you’d be surprised by how much easier it becomes to deploy these strategies in your daily life.

Let’s look at one more example…

Case Study: My Schedule

My schedule from my time as a grad student provides a good case study. To reach my relatively small work hour limit, I had to be careful about how I approached my day. I saw enough bleary-eyed insomniacs around here to know how easy it is to slip into a noon to 3 a.m. routine (the infamous “MIT cycle.”)

Here are some of the techniques I regularly used to remain within the confines of my fixed schedule:

  • I’m ruthlessly results oriented. What’s the ultimate goal of a graduate student? To produce good research that answers important questions. Nothing else really matters. For some of my peers, however, their answer to this metaphysical prompt was: “work really long hours to prove that you belong.” It was as if some future arbiter of their future was going to look back at their time clock punch card and declare whether they sufficiently paid their dues. Nonsense! I wanted to produce a few good papers a year. Anything that got in the way of this goal was treated with suspicion. This results-oriented vision made it easy to keep the middling crap from crowding my schedule.
  • I’m ultra-clear about when to expect results from me. And it’s not always soon. If someone slips something onto my queue, I make an honest evaluation of when it will percolate to the top. I communicate this date. Then I make it happen when the time comes. You can get away with telling people to expect a result a long time in the future, if — and this is a big if — you actually deliver when promised. Long lead times allow you to sidestep the pile-ups (which will bust a fixed schedule) that accrue when you insist on an immature, “do things only when the deadline looms” attitude.
  • I refuse. If my queue is too crowded for a potential project to get done in time, I turn it down.
  • I drop projects and quit. If a project gets out of control and starts to sap too much time from my schedule, or strays from my results-oriented vision: I drop it. If something demonstrably more important comes along, and it conflicts with something else in my queue, I drop the less important project. Here’s a secret: no one really cares what you do on the small scale, or what things you quit. In the end you’re judged on your results. If something is hindering your production of the important results in your field, you have to ask why you’re keeping it around.
  • I’m not available. I often work in hidden nooks of the various libraries on campus, or from my apartment. I check and respond to work e-mail only a couple times a day, and never at night or on weekends. People have to wait for responses from me. It’s often hard to find me. Sometimes people get upset when they send me something urgent on Friday night that needs to be done by Saturday morning. But eventually they get over it. Just as important, I’m not a jerk about it. I don’t have sanctimonious auto-responders about my e-mail habits. I just do what I do, and people adapt.
  • I batch and habitatize. Any regularly occurring work gets turned into a habit — something I do at a fixed time on a fixed date. For example, I work on my blog in the afternoon after lunch. I write first thing in the morning. When I was taking classes, I had recurring blocks set aside during the week for tackling their assignments. Habit-based schedules for regular work make it easier to tackle the non-regular projects. They also prevent schedule-busting pile-ups.
  • I start early. Sometimes real early. On certain projects that I know are important, I don’t tolerate procrastination. It doesn’t interest me. If I need to start something 2 or 3 weeks in advance so that my queue proceeds as needed, I do so.
  • I don’t ask permission. I think it’s wrong to assume that you automatically have the right to work whatever schedule you want. It’s a valuable prize that must be earned. And results are the currency you must spend to buy it. So long as I’m actually accomplishing the big-picture goals I’m paid to accomplish, I feel comfortable handling my schedule my own way. If I were producing mediocre crap, people would have a right to demand more access.

Conclusion

You could fill any arbitrary number of hours with what feels like productive work. Between e-mail, and crucial web surfing, and to-do lists that, in the age of David Allen, grow to lengths that rival the Bible, there is always something you could be doing. At some point, however, you have to put a stake in the ground and say: I know I have a never-ending stream of work, but this is when I’m going to face it. If you don’t, you’ll let this work push you around like a bully. It will force you into tiring, inefficient schedules, and you’ll end up more stressed and no more accomplished.

Fix the schedule you want. Then make everything else fit around your needs. Be flexible. Be efficient. If you can’t make it fit: change your work. But in the end, don’t compromise.

Cal Newport is an MIT postdoc, author, and founder of Study Hacks, the Internet’s most popular student advice blog.

10/14/2009

Balancing your work and colleagues

Working in a team environment is important. This article covers how to manage the relationship when your work and your colleagues' work are interlinked.
http://blogs.harvardbusiness.org/hmu/2009/10/when-a-colleagues-mistakes-aff.html

When a Colleague's Mistakes Affect You

12:39 PM Thursday October 8, 2009
by Amy Gallo

In an attempt to function in this increasingly complex world, organizations are becoming increasingly complex themselves. They are built on collaborative partnerships, dotted lines and matrixes, all of which mean more and more of your work depends on the work of someone else. When a colleague is making mistakes, this interconnectedness can feel like a major pitfall.

Yet a job where you don't interact with others is nearly impossible to find, not to mention somewhat boring. So, you need to figure out how to make relationships work. Every management expert would agree that positive working relationships are essential to getting things done. So what do you do when a colleague is not doing her part and it's affecting your work? Fortunately, handling your colleague's mistakes in a productive way can not only help remove barriers but may also help your colleague, and you, gain new skills.

What the Experts Say
The type of mistakes you might be affected by vary greatly. A colleague may miss deadlines, not produce the work required, make errors in calculations or even provide you with misinformation. These may all be innocent mistakes fueled by lack of knowledge, experience, or awareness, but without more information you can't be sure and won't be able to act.

Diagnose the Issue
The first step in addressing your colleague's behavior is to understand what's really going on. Try to determine if the problem is short-term, such as a personal issue at home, a particularly heavy workload, or a health problem — or long-term, such as a lack of skill or a poor cultural fit with the organization. As Allan Cohen, the Edward A. Madden Distinguished Professor of Global Leadership at Babson College and author of Influence without Authority points out, "What you don't know is if the person is getting the right support from others, if a non-work issue has cropped up, or if perhaps the person doesn't understand the issue like you do." This diagnosis can be done by looking for corroborating evidence from other colleagues and checking that your understanding of the issue aligns with theirs. Deborah Ancona, Seley Distinguished Professor of Management at the MIT Sloan School of Management and author of X-Teams: How to Build Teams that Lead, Innovate, and Succeed, warns that it's important to "be careful because you don't want to make anyone else see the problem if they haven't already."

Approach Your Colleague Directly
The best approach is to go to the source — speak with your colleague directly. This conversation should take place in an informal, private setting and you should always follow good feedback rules. Don't accuse or blame your colleague. Use concrete examples to explain what you are seeing and its impact on you.

Richard Hackman, the Edgar Pierce Professor of Social and Organizational Psychology at Harvard University and author of Leading Teams: Setting the Stage for Great Performances says, "We tend to attribute what's going wrong to an individual and specifically to something dispositional about them." This is dangerous because you are then attacking a person — not their behavior. Most importantly, to establish a common ground with your colleague, discuss the issue in context of mutual goals. "You want to ask 'What can we do to achieve our goals?' not 'You screwed up again,'" Hackman says.

Don't assume you know exactly why the colleague is making mistakes. As Hackman points out, "You need to be open to learning that you're wrong about the situation." Use an inquiry mode and ask questions like "What's going on?" and "Am I misreading or misunderstanding the situation?" In fact, you may discover that your colleague wasn't aware of the mistakes or how her actions appeared to others.

Offer Help and Support
If a short-term issue is causing the mistakes, such as a difficult time at home or an illness in the family, you should offer to help. You may even consider covering for the person as a way to build a positive relationship. As Ancona says, "This world is all about connections and not only do you not want to jeopardize the relationship, but you want to build it." Covering may mean picking up extra work, spending time double checking her work or offering to explain to other colleagues what is going on. Covering doesn't mean that you should lie on behalf of your colleague, nor does it mean a permanent shift in job responsibilities. You should only cover when you have an explicit agreement that the situation is temporary until circumstances change.

If you find that the source of the mistakes is a longer term issue, such as a lack of skill, you can offer to help brainstorm solutions. Perhaps your colleague can find a course that will help her build up her skills, or go to her manager to ask for assistance.

It is rarely a good idea to let your colleague continue to make mistakes. Cohen says, "In very competitive organizations, the temptation is to let people die on their own swords. But in those environments, it's even more appreciated when you don't let them die." By being generous now, you are incurring the obligation of your colleague to help you in the future. This reciprocity is often what strong professional relationships are built on.

Protect Yourself
It's possible that you'll discover your colleague is intentionally making mistakes to undermine you or take credit for your work. "These political situations are far messier to deal with," Ancona says. Fortunately they are far rarer as well. Cohen says that he has only met a handful of people throughout his career that are "true snakes." He advises, "It should be your last assumption that the colleague is making mistakes deliberately."

Ancona offers, "You can try to confront the person directly, hoping that may make him or her back down." If that doesn't work you can use the following tactics:

  • Make your work visible. Avoid bragging. Use the active voice instead of the passive voice. For example, try saying "I prepared these analyses that show where we should be investing resources" rather than "These analyses show where we should be investing resources."
  • Offer to lead a presentation when joint work is being shared. People often think of the person in the front of the room as the leader, or at least one of the more active participants in a project.
  • Take credit where credit is due. This doesn't mean you brag. Instead, showcase your involvement or let your manager know exactly what part of the project is the result of your efforts.

In these political situations, don't resort to badmouthing your colleague. Negative comments often reflect as badly on you as they do on the person you are speaking about.

When the Issue Continues...
Despite all your efforts and care in handling the situation, it is possible that the mistakes will continue. This isn't only an inconvenience; it could hinder your career. The experts suggest you take a few approaches to preserving your reputation. If possible, avoid working with that person in the future. If that's not possible, you can employ some of the same tactics listed above as if the person were undermining you. Also, you should consider approaching your manager. Explain what you've done to date and ask for her advice. Be clear you are not asking her to intervene.

The experts agree that things would need to be very serious, e.g. the project you're working on is headed for failure, before you approach your colleague's manager. There is a major risk that you could alienate your colleague and permanently damage the relationship (see Allan Cohen's experience in Case Study #2 below). In many organizational cultures, talking to a person's manager can be coded as not being "a team player."

Principles to Remember
Do:

  • Keep in mind that relationships matter
  • Be direct and honest with your colleague about how the mistakes are affecting you
  • Offer help if the colleague is struggling with a short-term issue such as a heavy workload or a personal issue

Don't:

  • Badmouth your colleague to anyone in the organization
  • Assume your colleague is aware of the mistakes
  • Go to your colleague's manager without first talking to your colleague and your manager

Case Study #1: Stopping Mistakes Before They Happen
For Drew Chatto, a software engineer who worked at VeriSign, close collaboration wasn't just part of his job, it was his job. While he wrote code on his own, it was always reviewed by others and then put together with his colleagues' work to form a complete product. Eddie, one of Drew's colleagues, was a less experienced — although not less talented — engineer. Because Eddie was relatively new to VeriSign, he wasn't familiar with the specifics of how the company wrote code. Instead of asking questions, he made assumptions and often finished code quickly. During code review, Drew regularly found mistakes with Eddie's work and had to ask him to rewrite it. Eddie never argued but he continued to make similar mistakes. Tired of having the same conversation over and over, Drew offered to help Eddie think through his code assignments before he began writing. These conversations gave Eddie the opportunity to ask how specific things were done at VeriSign, instead of making the decisions on his own. As Drew said, "I couldn't expect him to know the right questions." Eddie was open to the suggestion; he knew Drew had more experience, and he was likely tired of having to redo his work. Drew's approach helped Eddie avoid mistakes before they happened. While those preliminary conversations took more of Drew's time, they saved him time in the code review process and built a stronger, less contentious relationship with Eddie.

Case Study #2: The Risk of Escalation
Allan Cohen is a professor and dean at Babson College, and one of our experts from above. In a former role at a major university, his good friend and colleague Carl served as the Associate Dean of Allan's department. Allan was proposing a new program that required Carl's approval. Despite Carl's background in accounting, he kept making accounting errors when attributing costs to the new program. Worried about him, Allan stopped by his boss's office one afternoon to explain what was going on. His boss was the Dean of the School and as such, was also Carl's boss. In the middle of the conversation, there was a knock at the door and Carl walked in. Carl's office was directly next door and he explained that he had heard the entire conversation because of a chip in the concrete wall between the two offices. Allan explained, "We never mentioned the incident again but it took me well over a year to repair the relationship." Allan regrets not going to Carl directly first. "If I had, I could've saved the relationship and maybe even helped him."

9/08/2009

Perl profiler

I thought I put this here before but I guess it was only in my imagination...

Here is a guide to using DProf to profile your Perl code, to help you debug and optimize it.
Src: http://docstore.mik.ua/orelly/perl/prog3/ch20_06.htm

20.6. The Perl Profiler

Do you want to make your program faster? Well, of course you do. But first you should stop and ask yourself, "Do I really need to spend time making this program faster?" Recreational optimization can be fun,[2] but normally there are better uses for your time. Sometimes you just need to plan ahead and start the program when you're going on a coffee break. (Or use it as an excuse for one.) But if your program absolutely must run faster, you should begin by profiling it. A profiler can tell you which parts of your program take the most time to execute, so you won't waste time optimizing a subroutine that has an insignificant effect on the overall execution time.

[2] Or so says Nathan Torkington, who contributed this section of the book.

Perl comes with a profiler, the Devel::DProf module. You can use it to profile the Perl program in mycode.pl by typing:

perl -d:DProf mycode.pl

Even though we've called it a profiler--since that's what it does--the mechanism DProf employs is the very same one we discussed earlier in this chapter. DProf is just a debugger that records the time Perl entered and left each subroutine.

When your profiled script terminates, DProf will dump the timing information to a file called tmon.out. The dprofpp program that came with Perl knows how to analyze tmon.out and produce a report. You may also use dprofpp as a frontend for the whole process with the -p switch (described later).
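
As a quick reference, a typical session looks like this (mycode.pl stands in for your own script):

perl -d:DProf mycode.pl
dprofpp

The first command runs your script and writes tmon.out to the current directory; the second reads tmon.out and prints the report.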

Given this program:

outer();

sub outer {
    for (my $i=0; $i < 100; $i++) { inner() }
}

sub inner {
    my $total = 0;
    for (my $i=0; $i < 1000; $i++) { $total += $i }
}

inner();

the output of dprofpp is:

Total Elapsed Time = 0.537654 Seconds
  User+System Time = 0.317552 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c Name
 85.0   0.270  0.269    101   0.0027 0.0027 main::inner
 2.83   0.009  0.279      1   0.0094 0.2788 main::outer

Note that the percentage numbers don't add up to 100. In fact, in this case, they're pretty far off, which should tip you off that you need to run the program longer. As a general rule, the more profiling data you can collect, the better your statistical sample. If we increase the outer loop to run 1000 times instead of 100 times, we'll get more accurate results:
Total Elapsed Time = 2.875946 Seconds
  User+System Time = 2.855946 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c Name
 99.3   2.838  2.834   1001   0.0028 0.0028 main::inner
 0.14   0.004  2.828      1   0.0040 2.8280 main::outer

The first line reports how long the program took to run, from start to finish. The second line displays the total of two different numbers: the time spent executing your code ("user") and the time spent in the operating system executing system calls made by your code ("system"). (We'll have to forgive a bit of false precision in these numbers--the computer's clock almost certainly does not tick every millionth of a second. It might tick every hundredth of a second if you're lucky.)

The "user+system" times can be changed with command-line options to dprofpp. -r displays elapsed time, -s displays system time only, and -u displays user time only.

The rest of the report is a breakdown of the time spent in each subroutine. The "Exclusive Times" line indicates that when subroutine outer called subroutine inner, the time spent in inner didn't count towards outer's time. To change this, causing inner's time to be counted towards outer's, give the -I option to dprofpp.

For each subroutine, the following is reported: %Time, the percentage of time spent in this subroutine call; ExclSec, the time in seconds spent in this subroutine not including those subroutines called from it; CumulS, the time in seconds spent in this subroutine and those called from it; #Calls, the number of calls to the subroutine; sec/call, the average time in seconds of each call to the subroutine not including those called from it; Csec/c, the average time in seconds of each call to the subroutine and those called from it.

Of those, the most useful figure is %Time, which will tell you where your time goes. In our case, the inner subroutine takes the most time, so we should try to optimize that subroutine, or find an algorithm that will call it less. :-)

Options to dprofpp provide access to other information or vary the way the times are calculated. You can also make dprofpp run the script for you in the first place, so you don't have to remember the -d:DProf switch:

-p SCRIPT

Tells dprofpp that it should profile the given SCRIPT and then interpret its profile data. See also -Q.

-Q

Used with -p to tell dprofpp to quit after profiling the script, without interpreting the data.

-a

Sort output alphabetically by subroutine name rather than by decreasing percentage of time.

-R

Count anonymous subroutines defined in the same package separately. The default behavior is to count all anonymous subroutines as one, named main::__ANON__.

-I

Display all subroutine times inclusive of child subroutine times.

-l

Sort by number of calls to the subroutines. This may help identify candidates for inlining.

-O COUNT

Show only the top COUNT subroutines. The default is 15.

-q

Do not display column headers.

-T

Display the subroutine call tree to standard output. Subroutine statistics are not displayed.

-t

Display the subroutine call tree to standard output. Subroutine statistics are not displayed. A function called multiple (consecutive) times at the same calling level is displayed once, with a repeat count.

-S

Produce output structured by the way your subroutines call one another:

main::inner x 1         0.008s
main::outer x 1         0.467s = (0.000 + 0.468)s
   main::inner x 100       0.468s
Read this as follows: the top level of your program called inner once, and it ran for 0.008s elapsed time, and the top level called outer once and it ran for 0.467s inclusively (0s in outer itself, 0.468s in the subroutines called from outer) calling inner 100 times (which ran for 0.468s). Whew, got that?

Branches at the same level (for example, inner called once and outer called once) are sorted by inclusive time.

-U

Do not sort. Display in the order found in the raw profile.

-v

Sort by average time spent in subroutines during each call. This may help identify candidates for hand optimization by inlining subroutine bodies.

-g subroutine

Ignore subroutines except subroutine and whatever is called from it.

Other options are described in dprofpp(1), its standard manpage.
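
The -p switch combines with the reporting options above; a couple of hedged examples, reusing the hypothetical profile_demo.pl from earlier:

$ dprofpp -p profile_demo.pl -O 5   # profile the script, then show only the top 5 subroutines
$ dprofpp -T tmon.out               # print the full subroutine call tree from an existing run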

DProf is not your only choice of profiler. CPAN also holds Devel::SmallProf, which reports the time spent in each line of your program. That can help you figure out whether some particular Perl construct you're using is surprisingly expensive. Most of the built-in functions are pretty efficient, but it's easy to accidentally write a regular expression whose overhead increases exponentially with the size of the input. See also Section 20.2, "Efficiency", in Chapter 24, "Common Practices", for other helpful hints.
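
Devel::SmallProf is invoked the same way as DProf; a minimal sketch, again using the hypothetical script name from above:

$ perl -d:SmallProf profile_demo.pl   # writes per-line counts and times to smallprof.out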

9/03/2009

cygwin change default home directory

Just edit your user ID's line in the /etc/passwd file to change the home directory used when you start the console in Cygwin.
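
For reference, Cygwin's /etc/passwd lines follow the usual name:passwd:uid:gid:gecos:home:shell layout. A hypothetical entry (made-up username, IDs, and paths) with the home field, second from last, pointed somewhere new:

jdoe:unused:1001:513:John Doe:/cygdrive/c/Users/jdoe:/bin/bash

Any directory works in the home field; the change takes effect the next time you open the console.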

8/27/2009

Perl display the progress indicator

For a progress indicator in Perl, most people use a print statement without a newline ('\n') to show where the program is. The problem with this is that unless the output is unbuffered, the text waits in the buffer until a newline arrives before it is displayed on the terminal. So, until that happens, you are not going to see any progress...

The simple solution is to turn off buffering by setting $| to 1. All done, simple.
$| = 1;

Now it will display as it progresses.
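
For instance, a minimal progress loop (the work inside it is a hypothetical placeholder):

$| = 1;                               # unbuffer STDOUT
for my $i (1 .. 100) {
    # ... do one unit of work here ...
    print "\rProcessed $i of 100";    # \r rewinds to the start of the line
}
print "\n";                           # finish with a real newline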

6/23/2009

Want to get promoted? Read this.

Great tips on How to Get Promoted from lifehack.org. Quote:

1. Do your job well. I know that this is stating the obvious but it is the starting point. For promotion it is a necessary but not a sufficient requirement that you perform your current duties diligently. Many people think that this is all they need to do and that the rewards, recognition and promotion will follow. Corporate life is not ‘fair’ in this sense. Many people do great work and are passed over. You need to excel in your current role and do much more to climb the ladder.

2. Get noticed. One of the best ways to be promoted is if a senior manager in another department wants you. But this can only happen if they are aware of you. So you have to find ways to get in front of other people, particularly senior people, in a way that displays your good qualities and makes you memorable.

3. Volunteer. If someone is needed to present a proposal on behalf of your department, volunteer. If members are needed for a cross-departmental task force, volunteer. If the social committee want someone to help organize the staff barbecue, volunteer. Take on additional responsibilities both inside and outside your department. This shows that you are willing to get involved and it gets you noticed.

4. Discuss your ambitions with your manager. Make sure that your boss and your boss’s boss know that you are keen to be promoted. You can do this in a quiet professional way. Do not threaten or demand. Have a discussion where you ask the question, ‘What do I have to do to get promoted?’ Develop a plan. Senior managers understand ambition and there is nothing wrong with being ambitious so make sure that they understand your goals.

5. Work well with people. Many people who are technically proficient and excellent at task management do not get promoted because they lack people skills. Be aware of how you are perceived. Ask for feedback. It is not a question of popularity; it is more about communication, trust and dependability. Try not to make enemies. Find ways to work effectively with other people and you are more likely to be seen as ‘management material’.

6. Contribute ideas. Make positive, constructive suggestions for how things could be done better. Most managers (though not all) welcome this and it will signal that you are someone who can think about bigger issues. It shows that you welcome rather than fear change.

7. If you cannot move up, move across. Look for ways to broaden your experience. If you cannot move up in your area then consider moving across into a different area of the business at the same level so that you can learn new skills and make new contacts.

8. Have a plan. Set yourself goals for advancement and measure progress against them. If you need to acquire certain skills or experiences then plan to do so. If you are turned down for promotion, ask why. If you cannot meet your plan in your current organization or if you can make no more progress or if you no longer enjoy the work then look elsewhere. There are plenty of opportunities for ambitious people who work hard and are keen to learn.

5/27/2009

How to download a file from the Web using Perl

Simple enough, use LWP!

NAME

get, head, getprint, getstore, mirror - Procedural LWP interface

SYNOPSIS

 perl -MLWP::Simple -e 'getprint "http://www.sn.no"'

 use LWP::Simple;
 $content = get("http://www.sn.no/");
 if (mirror("http://www.sn.no/", "foo") == RC_NOT_MODIFIED) {
     ...
 }

 if (is_success(getprint("http://www.sn.no/"))) {
     ...
 }

DESCRIPTION

This interface is intended for those who want a simplified view of the libwww-perl library. It should also be suitable for one-liners. If you need more control or access to the header fields in the requests sent and responses received you should use the full object oriented interface provided by the LWP::UserAgent module.

The following functions are provided (and exported) by this module:

get($url)

The get() function will fetch the document identified by the given URL and return it. It returns undef if it fails. The $url argument can be either a simple string or a reference to a URI object.

You will not be able to examine the response code or response headers (like 'Content-Type') when you are accessing the web using this function. If you need that information you should use the full OO interface (see LWP::UserAgent).

head($url)

Get document headers. Returns the following 5 values if successful: ($content_type, $document_length, $modified_time, $expires, $server)

Returns an empty list if it fails. In scalar context returns TRUE if successful.

getprint($url)

Get and print a document identified by a URL. The document is printed to STDOUT as data is received from the network. If the request fails, then the status code and message are printed on STDERR. The return value is the HTTP response code.

getstore($url, $file)

Gets a document identified by a URL and stores it in the file. The return value is the HTTP response code.

mirror($url, $file)

Get and store a document identified by a URL, using If-Modified-Since, and checking the Content-Length. Returns the HTTP response code.

This module also exports the HTTP::Status constants and procedures. These can be used when you check the response code from getprint(), getstore() and mirror(). The constants are:

   RC_CONTINUE
RC_SWITCHING_PROTOCOLS
RC_OK
RC_CREATED
RC_ACCEPTED
RC_NON_AUTHORITATIVE_INFORMATION
RC_NO_CONTENT
RC_RESET_CONTENT
RC_PARTIAL_CONTENT
RC_MULTIPLE_CHOICES
RC_MOVED_PERMANENTLY
RC_MOVED_TEMPORARILY
RC_SEE_OTHER
RC_NOT_MODIFIED
RC_USE_PROXY
RC_BAD_REQUEST
RC_UNAUTHORIZED
RC_PAYMENT_REQUIRED
RC_FORBIDDEN
RC_NOT_FOUND
RC_METHOD_NOT_ALLOWED
RC_NOT_ACCEPTABLE
RC_PROXY_AUTHENTICATION_REQUIRED
RC_REQUEST_TIMEOUT
RC_CONFLICT
RC_GONE
RC_LENGTH_REQUIRED
RC_PRECONDITION_FAILED
RC_REQUEST_ENTITY_TOO_LARGE
RC_REQUEST_URI_TOO_LARGE
RC_UNSUPPORTED_MEDIA_TYPE
RC_INTERNAL_SERVER_ERROR
RC_NOT_IMPLEMENTED
RC_BAD_GATEWAY
RC_SERVICE_UNAVAILABLE
RC_GATEWAY_TIMEOUT
RC_HTTP_VERSION_NOT_SUPPORTED

The HTTP::Status classification functions are:

is_success($rc)

True if response code indicated a successful request.

is_error($rc)

True if response code indicated that an error occurred.

The module will also export the LWP::UserAgent object as $ua if you ask for it explicitly.

The user agent created by this module will identify itself as "LWP::Simple/#.##" (where "#.##" is the libwww-perl version number) and will initialize its proxy defaults from the environment (by calling $ua->env_proxy).
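
Putting it together, a minimal sketch of the download this entry's title describes; the URL (borrowed from the POD above) and the local filename are placeholders:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;

my $url  = 'http://www.sn.no/';   # hypothetical URL
my $file = 'index.html';          # hypothetical local filename

my $rc = getstore($url, $file);   # returns the HTTP response code
if (is_success($rc)) {
    print "Saved $url to $file\n";
} else {
    warn "Download failed with HTTP status $rc\n";
}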

5/20/2009

How to optimize your perl code!

Great resource on how to Optimize Perl. This helped me speed up my 3-hour-long script so that it now runs in under 10 min!!!

Quote below:

Sloppy programming, sloppy performance

I'll be honest: I love Perl and I use it everywhere. I've written Web sites, administration scripts, and games using Perl. I frequently save time by getting Perl to do and check things automatically for me, everything from my lottery numbers to the stock markets, and I even use it to automatically file my e-mail. Because Perl makes it so easy to do all of these things, there's a tendency to forget about optimization. In many cases this isn't the end of the world. So what if it takes an extra few milliseconds to look up your stock reports or parse those log files?

However, those same lazy habits that cost milliseconds in a small application are multiplied when dealing with larger scale development projects. It's the one area where the Perl mantra of TMTOWTDI (There's More Than One Way To Do It) starts to look like a bad plan. If you need speed, there may be only one or two ways to achieve the fastest results, whereas there are many slower alternatives. Ultimately, sloppy programming -- even if you achieve the desired result -- is going to result in sloppy performance. So, in this article I'm going to look at some of the key techniques you can use to squeeze those extra cycles out of your Perl application.


Approaching optimization

First of all, it's worth remembering that Perl is a compiled language. The source code you write is compiled on the fly into the bytecode that is executed. The bytecode is itself based on a range of instructions, all of which are written in a highly optimized form of C. However, even within these instructions, some operations that can achieve similar results are more highly optimized than others. Overall, this means that it's the combination of the logic sequence you use and the bytecode that is generated from this that ultimately affects performance. The differences between certain similar operations can be drastic. Consider the code in Listings 1 and 2. Both create a concatenated string, one through ordinary concatenation and the other through generating an array and concatenating it with join.


Listing 1. Concatenating a string, version 1
my $string = 'abcdefghijklmnopqrstuvwxyz';
my $concat = '';

foreach my $count (1..999999)
{
    $concat .= $string;
}




Listing 2. Concatenating a string, version 2
my $string = 'abcdefghijklmnopqrstuvwxyz';
my @concat;

foreach my $count (1..999999)
{
    push @concat, $string;
}
my $concat = join('', @concat);

Running Listing 1, I get a time of 1.765 seconds, whereas Listing 2 requires 5.244 seconds. Both generate a string, so what's taking up the time? Conventional wisdom (including that of the Perl team) would say that concatenating a string is a time-expensive process, because we have to extend the memory allocation for the variable and then copy the string and its addition into the new variable. Conversely, adding a string to an array should be relatively easy. We also have the added problem of duplicating the string concatenation using join(), which adds an extra second.

The problem, in this instance, is that push()-ing strings onto an array is time-intensive; first of all, we have a function call (which means pushing items onto a stack, and then taking them off), and we also have the additional array management overhead. In contrast, concatenating a string is pretty much just a case of running a single opcode to append a string variable to an existing string variable. Even if we set the array size to alleviate the overhead (using $#concat = 999999), we still only save another second.

The above is an extreme example, and there are times when using an array will be much quicker than using strings; a good example here is if you need to reuse a particular sequence but with an alternate order or different interstitial character. Arrays are also useful, of course, if you want to rearrange or reorder the contents. By the way, in this example, an even quicker way of producing a string that repeats the alphabet 999,999 times would be to use:

$concat = 'abcdefghijklmnopqrstuvwxyz' x 999999;

Individually, many of the techniques covered here won't make a huge difference, but combined in one application, you could shave a few hundred milliseconds, or even seconds, off of your Perl applications.


Use references

If you work with large arrays or hashes and use them as arguments to functions, use a reference instead of the variable directly. By using a reference, you tell the function to point to the information. Without a reference, you copy the entire array or hash onto the function call stack, and then copy it again in the function. References also save memory (which reduces footprint and management overheads) and simplify your programming.
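
A minimal sketch of the difference (the sub and variable names are made up):

use strict;
use warnings;

my @big = (1 .. 100_000);

my $total = sum_list(\@big);      # passes one scalar: a reference to @big
                                  # sum_list(@big) would copy all 100,000 elements

sub sum_list {
    my ($list) = @_;              # $list is an array reference
    my $sum = 0;
    $sum += $_ for @$list;        # dereference and read the elements in place
    return $sum;
}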


String handling

If you are using static strings in your application a lot -- for example, in a Web application -- remember to use single quotes rather than doubles. Double quotes force Perl to look for a potential interpolation of information, which adds to the overhead of printing out the string:

print 'A string','another string',"\n";

I've also used commas to separate arguments rather than using a period to concatenate the string first. This simplifies the process; print simply sends each argument to the output file. Concatenation would concatenate the string and print it as one argument.


Loops

As you've already seen, function calls with arguments are expensive, because for the function call to work, Perl has to put the arguments onto the call stack, call the function, and then receive the responses back through the stack again. All of this requires overhead and processing that we could probably do without. For this reason, excessive function calls in a loop are generally a bad idea. Again, it comes down to a comparison of numbers. Looping through 1,000 items and passing information to a function will trigger the function call 1,000 times. To get around this, I just switch the sequence around. Instead of using the format in Listing 3, I use the approach in Listing 4.


Listing 3. Loop calling functions
foreach my $item (keys %{$values})
{
    $values->{$item}->{result} = calculate($values->{$item});
}

sub calculate
{
    my ($item) = @_;
    return ($item->{adda} + $item->{addb});
}




Listing 4. Function using loops
calculate_list($values);

sub calculate_list
{
    my ($list) = @_;
    foreach my $item (keys %{$list})
    {
        $list->{$item}->{result} = ($list->{$item}->{adda} + $list->{$item}->{addb});
    }
}

Better still, in a simple calculation like this one or for any straightforward loop work, use map:

map { $values->{$_}->{result} = $values->{$_}->{adda}+$values->{$_}->{addb} } keys %{$values};

Remember also that each iteration through the loop wastes time, so rather than working through the same loop a number of times, try to perform all the actions in one pass through the loop.


Sorts

Another common operation related to loops is sorting information, particularly keys in a hash. It's tempting in this instance to embed some processing of list elements into the sort operation, such as the one shown here in Listing 5.


Listing 5. Bad sorting
my @marksorted = sort { sprintf('%s%s%s',
                                $marked_items->{$b}->{'upddate'},
                                $marked_items->{$b}->{'updtime'},
                                $marked_items->{$b}->{itemid}) <=>
                        sprintf('%s%s%s',
                                $marked_items->{$a}->{'upddate'},
                                $marked_items->{$a}->{'updtime'},
                                $marked_items->{$a}->{itemid}) } keys %{$marked_items};

This is a fairly typical sort of complex data, in this case ordering something by date, time, and ID number by concatenating the numbers into a single number that we can then sort numerically. The problem is that the sort works through the list of items and moves them up or down through the list based on the comparison operation. In effect, this is a type of loop, but unlike the loop examples we've already seen, a sprintf call has to be made for each comparison. That's at least twice for each iteration, and the exact number of iterations through the list will depend how ordered it was to begin with. For example, with a 10,000-item list you could expect to call sprintf over 240,000 times.

The solution is to create a list that contains the sort information, and generate the sort field information just once. Taking the sample in Listing 5 as a guide, I'd rewrite that fragment into something like the code in Listing 6.


Listing 6. Better sorting
map { $marked_items->{$_}->{sort} = sprintf('%s%s%s',
                                            $marked_items->{$_}->{'upddate'},
                                            $marked_items->{$_}->{'updtime'},
                                            $marked_items->{$_}->{itemid}) } keys %{$marked_items};
my @marksorted = sort { $marked_items->{$b}->{sort} <=>
                        $marked_items->{$a}->{sort} } keys %{$marked_items};

Instead of calling sprintf all those times, we call it just once for each item in the hash in order to generate a sort field in the hash, and then use that sort field directly during the sort. The sorting process only has to access the sort field's value. You have cut down the calls on that 10,000-item hash from 240,000 to just 10,000. It depends on what the original sort section was doing, but the method shown in Listing 6 can save as much as half the time.

If you produce these hashes from the results of a database query (through MySQL or similar), sort within the query and record the order as you build the hash; then you won't need to iterate over the information again.


Using short circuit logic

Related to the sort operation is how to work through a list of alternative values. Using many if statements can be incredibly time consuming. For example, look at the code in Listing 7.


Listing 7. Making a choice
if ($userchoice > 0)
{
$realchoice = $userchoice;
}
elsif ($systemchoice > 0)
{
$realchoice = $systemchoice;
}
else
{
$realchoice = $defaultchoice;
}

Aside from the waste of space in terms of sheer content, there are a couple of problems with this structure. From a programming perspective, it has the issue that it never checks if any of the variables have a valid value, a fact that would be highlighted if warnings were switched on. Second, it has to check each option until it gets to the one it wants, which is wasteful, as comparison operations (particularly on strings) are time consuming. Both problems can be solved by using short circuit logic.

If you use the logical || operator, Perl will use the first true value it comes across, in order, from left to right. The moment it finds a valid value, it doesn't bother processing any of the other values. In addition, because Perl is looking for a true value, it also ignores undefined values without complaining about them. So we can rewrite the above into a single line:

$realchoice = $userchoice || $systemchoice || $defaultchoice;

If $userchoice is a true value, Perl doesn't even look at the other variables. If $userchoice is false (see Table 1), then Perl checks the value of $systemchoice and so on until it gets to the last value, which is always used, whether it's true or not.

Table 1. $userchoice values

Value                                                 Logical value
Negative number                                       True
Zero                                                  False
Positive number                                       True
Empty string                                          False
Non-empty string                                      True
Undefined value                                       False
Empty list (including hashes)                         False
List with at least one element (including hashes)     True

Use AutoLoader

One of the most expensive portions of the execution of a Perl script is the compilation of source code into the bytecode that is actually executed. On a small script with no external modules, the process takes milliseconds. But start to include a few of your own external modules and the time increases. The reason is that Perl does little more with a module than importing the text and running it through the same compilation stage. That can turn your 200 line script into a 10,000 or 20,000 line script very quickly. The result is that you increase the initial stages of the compilation process before the script even starts to do any work.

During the normal execution of your script, it may be that you only use 10 percent, or even 5 percent, of all the functions defined in those modules. So why load them all when you start the script? The solution is to use AutoLoader, which acts a bit like a dynamic loader for Perl modules. This uses files generated by the AutoSplit system, which divides up a module into the individual functions. When you load the module through use, all you do is load the stub code for the module. It's only when you call a function contained within the module that the AutoLoader steps in and then loads and compiles the code only for that function. The result is that you convert that 20,000 line script with modules back into a 200-line script, speeding up the initial loading and compilation stages.

I've saved as much as two seconds just by converting one of my applications to use the AutoLoader system in place of preloading. It's easy to use: just change your modules from the format shown in Listing 8 to that shown in Listing 9, and then make sure to use AutoSplit to create the loading functions you need. Note that you don't need to use Exporter any more; AutoLoader handles the loading of individual functions automatically, without your having to explicitly list them.


Listing 8. A standard module
package MyModule;

use OtherModule;
require Exporter;
our @ISA    = qw(Exporter);
our @EXPORT = qw(MySub);

sub MySub
{
    ...
}

1;




Listing 9. An autoloading module
package MyModule;

use OtherModule;
use AutoLoader 'AUTOLOAD';

1;

__END__

sub MySub
{
    ...
}

The main difference here is that functions you want to autoload are no longer defined within the module's package space but in the data section at the end of the module (after the __END__ token). AutoSplit will place any functions defined here into the special AutoLoader files. To split up the module, use the following command line:

perl -e 'use AutoSplit; autosplit($ARGV[0], $ARGV[1], 0, 1, 1)' MyModule.pm auto
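
Assuming the split succeeds, you should end up with one file per autoloaded function under the auto directory, plus the index that AutoLoader consults at use time (paths follow the MyModule example above):

auto/MyModule/MySub.al        # the body of MySub, loaded on first call
auto/MyModule/autosplit.ix    # index of the split functions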


Using bytecode and the compiler back ends

There are three ways to use the compiler: bytecode production, full compilation, or simply as a debugging/optimizing tool. The first two methods rely on converting your original Perl source into its compiled bytecode form and storing this precompiled version for execution. This is best used through the perlcc command. These two modes follow the same basic model but produce the final result differently. In bytecode mode, the resulting compiled bytecode is written out to another Perl script. The script consists of the ByteLoader preamble, with the compiled code stored as a byte string. To create this bytecode version, use the -B option to the perlcc command. For example:

$ perlcc -B script.pl

This will create a file, a.out. The default file name is not very friendly, but the resulting file can be executed with any Perl executable on any platform (Perl bytecode is platform independent):

$ perl a.out

What this does is save Perl from having to compile the script from its source code into bytecode each time; instead, it just runs the bytecode that was already generated. This is similar to the process behind Java compilation, and it is in fact the same single step away from being a truly compiled form of the language. On short scripts, especially those that use a number of external modules, you probably won't notice a huge speed increase. On larger scripts that "stand alone" without a lot of external module use, you should see a noticeable improvement.

The full compilation mode is almost identical, except that instead of producing a Perl script with the compiled bytecode embedded in it, perlcc produces a version embedded into C source that is then compiled into a full-blown, standalone executable. This is not cross-platform compatible, but it does allow you to distribute an executable version of a Perl script without giving out the source. Note, however, that this doesn't convert the Perl into C; it just embeds Perl bytecode into a C-based application. This is actually the default mode of perlcc, so a simple:

$ perlcc script.pl

will create, and compile, a standalone application called a.out.

One of the lesser-known solutions for both debugging and optimizing your code is to use the Perl compiler with one of the many "back ends."

The back ends are actually what drive the perlcc command, and it's possible to use a back-end module directly to create a C source file that you can examine. The Perl compiler works by taking the generated bytecode and then outputting the results in a variety of different ways. Because you're looking at the opcodes generated during the compilation stage, you get to see the code after Perl's own internal optimizations have been applied. Providing you know the Perl opcodes, you can begin to identify where the potential bottlenecks might be. From a debugging perspective, go with back ends such as Terse (which is itself a wrapper on Concise) and Showlex. You can see in Listing 10 what the original Listing 1 looks like through the Terse back end.
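
The back ends are driven through the O module, so you can invoke one directly; assuming the Listing 1 code is saved as concat1.pl (the name shown at the bottom of Listing 10):

$ perl -MO=Terse concat1.pl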


Listing 10. Using Terse to study bytecode
LISTOP (0x306230) leave [1]
    OP (0x305f60) enter
    COP (0x3062d0) nextstate
    BINOP (0x306210) sassign
        SVOP (0x301ab0) const [7] PV (0x1809f9c) "abcdefghijklmnopqrstuvwxyz"
        OP (0x305c30) padsv [1]
    COP (0x305c70) nextstate
    BINOP (0x305c50) sassign
        SVOP (0x306330) const [8] PV (0x180be60) ""
        OP (0x306310) padsv [2]
    COP (0x305f20) nextstate
    BINOP (0x305f00) leaveloop
        LOOP (0x305d10) enteriter [3]
            OP (0x305cf0) null [3]
            UNOP (0x305cd0) null [141]
                OP (0x305e80) pushmark
                SVOP (0x3065d0) const [9] IV (0x180be30) 1
                SVOP (0x3065f0) const [10] IV (0x1801240) 999999
        UNOP (0x305ee0) null
            LOGOP (0x305ec0) and
                OP (0x305d50) iter
                LISTOP (0x305e60) lineseq
                    COP (0x305e10) nextstate
                    BINOP (0x305df0) concat [6]
                        OP (0x305d70) padsv [2]
                        OP (0x305dd0) padsv [1]
                    OP (0x305ea0) unstack
concat1.pl syntax OK


Other tools

What I've covered here looks entirely at the code that makes up your applications. While that's where most of the problems will be, there are tools and systems you can use that can help identify and locate problems in your code that might ultimately help with performance.

Warnings/strict execution

It's a common recommendation, but it really can make a difference. Use the warnings and strict pragmas to ensure nothing funny is going on with variable use, typos, and other inconsistencies. Using them in all your scripts will help you eliminate all sorts of problems, many of which can be the source of performance bottlenecks. Common faults picked up by these pragmas include ambiguous references and dereferences, use of undefined values, and typos in the names of unused or undefined functions.

All of this help, though, comes at a slight performance cost. I keep warnings and strict on while programming and debugging, and I switch them off once the script is ready to be used in the real world. It won't save much, but every millisecond counts.
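
For reference, the preamble in question is just two lines at the top of the script:

use strict;      # require declared variables, catch symbolic references
use warnings;    # warn about undefined values, ambiguous constructs, and more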

Profiling

Profiling is a useful tool for optimizing code, but all it does is identify the potential location of the problem; it doesn't actually point out what the potential issue is or how to resolve it. Also, because profiling relies on monitoring the number of executions of different parts of your application, it can, on occasion, give misleading advice about where a problem lies and the best approach for resolving it.

However, profiling is still a useful, and often vital, part of the optimization process. Just don't rely on it to tell you everything you need to know.

Debugging

To me, a badly optimized program means that it has a bug. The reverse is also true: bugs often lead to performance problems. Classic examples are badly de-referenced variables or reading and/or filtering the wrong information. It doesn't matter whether your debugging technique involves using print statements or the full-blown debugger provided by Perl. The sooner you eliminate the bugs, the sooner you will be able to start optimizing your application.


Putting it all together

Now that you know the techniques, here is the way to go about using them together to produce optimized applications. I generally follow this sequence when optimizing:

  1. Write the program as optimized as possible using the techniques above. Once you start to use them regularly, they become the only way you program.
  2. Once the program is finished, or at least in a releasable state, go back through the code and double-check by hand that you are using the most efficient solutions. You'll be able to spot a number of issues just by re-reading, and you might pick up a few potential bugs, too.
  3. Debug your program. Bugs can cause performance problems, so you should always eliminate the bugs first before doing a more intense optimization.
  4. Run the profiler. I always do this once on any serious application, just to see if there's something -- often obvious -- that I might have missed.
  5. Go back to step 1 and repeat. I've lost track of the number of times I've completely missed a potential optimization the first time around. Either I'll go back and repeat the process two or three times in one session, or I'll leave it, do another project, and return a few days, weeks, or months later. By then, you'll often have found an alternative way of doing something that saves time.

At the end of the day, there is no magic wand that will optimize your software for you. Even with the debugger and profiler, all you get is information about what might be causing a performance problem, not necessarily any helpful advice on what you should do to fix it. Be aware as well that there is a limit to what you can optimize. Some operations will simply take a lot of time to complete. If you have to work through a 10,000-item hash, there's no way of simplifying that process. But as you've seen, there might be ways of reducing the overhead in each case.