You probably want to look up Donald Knuth's books on algorithms - they're the classics of computer science, and have been for years. See Wikipedia on Knuth. Problem is, they're likely out of print and expensive. I haven't seen them in bookstores in years.
The Art of Computer Programming is still in print. You can find them on Amazon right now.
A little back story on those books: Knuth was contracted to write 'The Art of Computer Programming' sometime around 1965. It might have been later. Somewhere along the line he found that the typesetting software of the day wasn't up to his standards, so he wrote TeX just to make sure his books looked right. Come '67 or '68 the publisher called him up and asked how things were progressing. Knuth responded that he was almost done with chapter 1 of the book. The publisher asked how big that was. Knuth checked and informed them that Chapter 1 was almost 400 pages long.
So, they decided to make each chapter its own volume at that point.
Some more trivia: Knuth kept patching up TeX, and at version 3.0 he considered it complete. He would only do bug fixes from that point on, and decided he'd add another digit of pi to the version number for each one. So that's why TeX is at version 3.141592 (or whatever it actually is right now; too lazy to look it up).
They're pretty heavy reading if your major was MIS, I suppose. He presents all the discussion in an imaginary language called MIX, which is an assembly-like language for a hypothetical computer. Good stuff, if you can follow it.
Really, if you can get your head around Big-O notation and spot a code section's Big-O fairly quickly, you're 80% of the way there. That'll keep you from doing the vast majority of stupid things that people do when working with any significant amount of data. Little stuff like knowing when to use a HashMap for lookups instead of bombing through an ArrayList looking for the piece of data you want.
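To make that HashMap-vs-ArrayList point concrete, here's a minimal sketch (the data and names are made up): both structures hold the same key/value pairs, but the list lookup is O(n) while the map lookup is O(1).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LookupDemo {
    // O(n): scan every element until we find the key. Fine for a tiny list,
    // disastrous when called thousands of times against a big one.
    static String findInList(List<String[]> pairs, String key) {
        for (String[] pair : pairs) {
            if (pair[0].equals(key)) return pair[1];
        }
        return null;
    }

    public static void main(String[] args) {
        List<String[]> pairs = new ArrayList<>();
        Map<String, String> index = new HashMap<>();
        for (int i = 0; i < 100_000; i++) {
            pairs.add(new String[] { "user" + i, "Name " + i });
            index.put("user" + i, "Name " + i);
        }
        // Same answer either way -- but the map does one hash lookup, O(1),
        // while the list walks up to 100,000 entries, O(n).
        System.out.println(findInList(pairs, "user99999")); // Name 99999
        System.out.println(index.get("user99999"));         // Name 99999
    }
}
```

Multiply that list scan by a loop that runs once per record and you've quietly written an O(n^2) job.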
Or, knowing the difference between a mutable and an immutable data storage type. It's really easy to get burned by string concatenation in this area. Hell, it happened to me just a couple of years ago. I wrote a stupid little routine to take a string and re-encode it into something that could be stored as plain ASCII, then restore it back to proper Unicode on the way out. It was intended for strings no longer than 255 chars, though that wasn't really in the design spec -- just an unwritten rule I had in my head. Well, once somebody started mashing 10k of data through it, performance fell through the floor, and we had no idea where the problem was. When it was traced to my little code snippet, I wrote some unit tests, found that it took 5 minutes to encode 10k of data, optimized it, and got that down to a second. It's simple to do it the right way, but it's damned simple to do it the wrong way too.
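This isn't my original routine, but a sketch of the same trap (the escaping scheme here is made up for illustration): because Java Strings are immutable, each `+=` copies everything accumulated so far, making the naive loop O(n^2); a mutable StringBuilder brings it back to O(n).

```java
public class EncodeDemo {
    // Naive: each += allocates a new String and copies the old one -> O(n^2) overall.
    static String encodeNaive(String input) {
        String out = "";
        for (char c : input.toCharArray()) {
            if (c < 128) out += c;
            else out += String.format("\\u%04x", (int) c); // hypothetical ASCII escape
        }
        return out;
    }

    // Fixed: StringBuilder is mutable, so appends are amortized O(1) -> O(n) overall.
    static String encodeFast(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            if (c < 128) out.append(c);
            else out.append(String.format("\\u%04x", (int) c));
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Identical output, wildly different cost on large inputs.
        System.out.println(encodeFast("héllo")); // h\u00e9llo
    }
}
```

The two versions differ by one class choice, which is exactly why this mistake is so easy to make and so easy to fix once a profiler points at it.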
Honestly, speaking as a programmer who's been doing business application development for about 7 years now, efficient algorithms are the least of your worries. So long as you properly segment your code, as has been mentioned above, you'll be able to fix up any performance problems related to algorithms quickly and painlessly. If you obsess about speed all the time, you just might end up taking more time to develop the software than is necessary.
I've never seen a project crippled because of one stupid simple code segment. They're easily replaced. However, bad design is almost impossible to overcome once the project is in full swing. Let's take my last major project as an example:
Platform: Java
Design: Delegate / Facade
Example:
The Web UI portion instantiates a Delegate and makes a call into it. The Delegate passes the call into the Facade layer, which adheres to the same interface as the Facade Security Bean. The Facade then calls into the Business Management layer for the object in question, which finally passes the call into that object's Data Access Object.
Yep, if you wanted to add one (1!) call to the data layer, you had to implement the same function signature in five (5!) places. Most of it was boilerplate code, but still, it was wasteful.
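A sketch of what that looks like in practice (all the class names here are hypothetical, not from the actual project): one trivial data-layer call, and the same signature re-implemented at every layer on the way down.

```java
// Hypothetical names -- one call threaded through all five layers.
interface CustomerDao      { String findName(int id); }
interface CustomerManager  { String findName(int id); }
interface CustomerFacade   { String findName(int id); }
interface CustomerDelegate { String findName(int id); }

class CustomerDaoImpl implements CustomerDao {
    public String findName(int id) { return "Customer " + id; } // pretend this hits the database
}
class CustomerManagerImpl implements CustomerManager {
    private final CustomerDao dao = new CustomerDaoImpl();
    public String findName(int id) { return dao.findName(id); } // pure pass-through
}
class CustomerFacadeImpl implements CustomerFacade {
    private final CustomerManager mgr = new CustomerManagerImpl();
    public String findName(int id) { return mgr.findName(id); } // pure pass-through
}
class CustomerDelegateImpl implements CustomerDelegate {
    private final CustomerFacade facade = new CustomerFacadeImpl();
    public String findName(int id) { return facade.findName(id); } // pure pass-through
}

public class LayeringDemo {
    public static void main(String[] args) {
        // The Web UI sees only the delegate; the same signature lives
        // in four more places underneath it.
        System.out.println(new CustomerDelegateImpl().findName(42)); // Customer 42
    }
}
```

Three of the four implementations do literally nothing but forward the call.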
Why would anybody use this design? In case you wanted to distribute the load across multiple business servers and leave your web presentation layer with only a few boxes. So, in theory you could have 2 machines running the Web UI and 5 running all the business and DAO logic... and they're all hitting the same database server, which is actually where you're going to run into a choke point. But forget that, because, well, this design is kinda cool. I've seen this design tried twice, and both attempts met the same fate: disaster.
And it's damned near impossible to get away from once the project is in full swing.
Come to think of it, if you're gearing up for business application development, the most important thing to brush up on is SQL. In my experience, poor data access code is where the vast majority of performance problems are found.
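The classic example of poor data access code is the "N+1 query" pattern: fetch a list with one query, then fire a separate query per row for related data instead of joining. A sketch, with made-up table names:

```sql
-- Slow: 1 query for the orders, then 1 query PER order for its customer.
SELECT id, customer_id FROM orders;
SELECT name FROM customers WHERE id = ?;  -- executed N times, once per order

-- Fast: one query, and the database does the matching for you.
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id;
```

Round-trips to the database are expensive; spotting and collapsing these is usually worth far more than micro-optimizing application code.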