Friday, 20 September 2013

Introduction to virtualization: Abstraction is key

Posted: 20 Sep 2013 03:14 AM PDT

Virtualization technologies continue to mature and become more important in modern data centers. Rick Vanover explains one of the key concepts for beginners. 
In today's data center virtualization is a standard practice. That doesn't mean that every IT department has moved to a virtualization technology, but it is important to have a basic understanding of what it is and how it could possibly benefit your organization. If you haven't started working with virtualization yet, this post and subsequent pieces in this series will help get you up to speed.
A good starting point for understanding virtualization is, simply put, abstraction. Virtualization exists in many forms: as hypervisors, virtual networks, virtual storage engines, virtualized applications, and more. Let's start with the hypervisor, which is arguably the most common type of virtualization practiced today.
A hypervisor abstracts physical resources from the systems running on top of it. That one step means a couple of things. For one, physical resources (CPU, memory, disk, and network) are shared, so multiple systems can run on a single piece of hardware.
For two of the latest advances in virtualization technology -- VMware's software-defined data center and Microsoft's Cloud OS -- the underlying core is the hypervisor. VMware ESXi and Microsoft Hyper-V are the two most popular Type 1 hypervisors in use today. Additional Type 1 hypervisors include KVM, XenServer, and RHEV. There are also Type 2 hypervisors, which run as a software engine within an operating system installed on a computer; these include VMware Workstation, VMware Player Plus, VirtualBox, and others.
In each separation, the devices presented to the virtual machines are abstracted by the hypervisor. There are situations where devices can be "passed-through" natively to the guest. This is frequently used for storage devices (LUNs, drives), I/O controllers (Fibre Channel HBAs), USB devices, and more.
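Because the hypervisor stamps its own identity onto the abstracted hardware, a guest can often tell which platform it is running on by inspecting its SMBIOS/DMI product string. The sketch below is a minimal, illustrative classifier; the marker strings are commonly reported values, not an exhaustive or authoritative list:

```python
def identify_hypervisor(product_name: str) -> str:
    """Map a DMI/SMBIOS system product string to a likely hypervisor.

    The marker strings below are commonly reported values; treat them
    as illustrative assumptions rather than an exhaustive list.
    """
    markers = {
        "VMware Virtual Platform": "VMware ESXi/Workstation",
        "Virtual Machine": "Microsoft Hyper-V",
        "VirtualBox": "Oracle VirtualBox",
        "KVM": "KVM",
    }
    for marker, hypervisor in markers.items():
        if marker in product_name:
            return hypervisor
    return "physical or unknown"

# On Linux the product string can be read from
# /sys/class/dmi/id/product_name; on Windows it appears as the
# system model in Device Manager / WMI's Win32_ComputerSystem.
```

A physical host such as the Lenovo ThinkServer below would fall through to "physical or unknown," while its guests report the virtual platform instead.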
Let's take a look at one example running Hyper-V. In Figure A below, you'll see a look into a server running Hyper-V. This server is a Lenovo ThinkServer running Windows Server 2012 with Hyper-V (added as a role to the server). On the left is the Hyper-V Manager showing the VMs, and on the right is the device manager view of the host showing the physical hardware.

Figure A

hyperV_FigA.jpg
One of the virtual machines, VME-RRAS, runs Windows Server 2012 as well. It's important to note that virtual machines running on a hypervisor don't need to run the same operating system, which is great if you need to test newer and/or older operating systems. Inside the VME-RRAS virtual machine, the Device Manager items look completely different because the VM uses abstracted devices, as Figure B shows:

Figure B 

abstracted_devices.FigB.jpg
That's how a guest VM looks under Hyper-V, but if that same operating system were running on VMware ESXi, the devices would appear differently. Figure C shows the same Device Manager categories on a VMware ESXi virtual machine:

Figure C

VMwareESXi.FigC.jpg

Categorically, each hypervisor applies a number of different technologies to present the devices to the virtual machine. VMware's vmxnet3 virtual network interface, Hyper-V's virtual Fibre Channel, and VMware's configurable video memory are all applied differently but affect the abstraction applied to the VM.
Abstraction is the starting point of all hypervisor virtualization, and many features build from there. Once you have this under your belt, the next steps of virtualization will come naturally. What low-level questions do you have on virtualization? Share your comments below to help me steer the content for subsequent posts in the series.

Mac Mail vs. Microsoft Outlook: The dirty truth

Posted: 20 Sep 2013 03:10 AM PDT

This post takes a look at how Mac Mail performs next to Outlook for Mac. Which do you prefer and why?

Sometimes, I find myself wanting to become an Apple fanboy. It would be easier, after all, within the hectic, ever-changing IT industry to just know I can trust what the manufacturer tells me. But years of technology consulting have taught me that vendors are evil.
Yes, it's true. I'm sorry you had to read it here. But that's the way it is in the real world, where the consulting firm I operate services hundreds of different commercial businesses and organizations. Vendors will promise you the world and assure you their mail client (or other product) is the best. However, your experience may differ.
Even before I began offering IT services to others, family and friends purchasing new Macs would frequently ask which email client is the best on OS X. I've always been partial to OS X Mail, which should make Apple developers happy. They've earned the accolade. The app is integrated within the OS, loads quickly, boasts a basic but attractive interface, possesses clean and well-laid-out elements, and proves easy to navigate. Composing messages, replying to email, and sorting the inbox are painless tasks. Creating rules or email signatures within Mail doesn't induce knee-knocking anxiety, the way doing so might in, say, Microsoft Outlook. Mail is simple, and that lack of complexity makes it more approachable.
Microsoft's older Entourage applications, of course, earned little popularity. Rightfully so. Many Entourage users complained of database corruption and slow performance. Microsoft wisely replaced Entourage with Outlook. With Outlook for Mac 2011's release, I was hopeful that a new standard was in hand. But I've been disappointed. Outlook takes longer to open (my scientifically invalid, non-double-blind testing shows Outlook requires 23 seconds to open, whereas Mail requires only five), regularly encounters synchronization delays, and often simply doesn't update my Exchange mailbox with changes as accurately or rapidly as Mail does, at least in my experience.
Ultimately, I use both Mail and Outlook for Mac, if for no other reason than to stay current with both platforms. I've configured the Macs in my home and business to connect to POP3, IMAP, and Exchange accounts, too, and I access mail, contacts, and calendars using Outlook and OS X's built-in Mail, Contacts, and Calendar. Apple's unending efforts to improve Mail, including message integration within Notification Center, iCloud reliability improvements, and Conversation views are encouraging and continue to make Mail a favorite application.
However, Mail isn't perfect.
Outlook, ultimately, gains an edge due to the clean manner in which it successfully integrates contacts and calendaring. Opening shared calendars, in particular, is easier within Outlook, in my opinion, than within Calendar. And Outlook consistently displays HTML email messages, specifically marketing messages that I've requested to receive, properly.
Mail stumbles on that front. Marketing messages that are sent by large, well-known firms you would recognize (ThinkGeek, Barnes & Noble, and NPR are a few examples) and may also receive within your inbox, regularly fail to format properly within Mail. That's frustrating.
So, it's a tradeoff. If you want the ease of use and generally acceptable performance Mail provides, you can save hundreds of dollars per Mac by leveraging Mail instead of Outlook. If you operate within an enterprise environment, though, you may not have time for workarounds and may simply find Outlook the best fit. And if you or your users also need Word, Excel, and/or PowerPoint, Outlook is almost certainly included with the license your organization purchases, so firing up Outlook becomes a no-brainer. Just be sure to give Outlook time to open and then sync changes with Exchange before exiting the program.
Which do you prefer: Mac Mail or Outlook for Mac? Share your opinion in the discussion thread below.

SimpliVity's OmniCube: the new converged data center hardware on the block

Posted: 20 Sep 2013 03:05 AM PDT

SimpliVity joins the converged infrastructure revolution with its OmniCube product. Get the details about this virtual machine centric data center hardware. 
I attended VMworld 2013 and was lucky enough to also go to Virtualization Field Day (VFD) Roundtables with SimpliVity, CommVault, Infinio, and Asigra. For this article, I'll focus on the Boston-based SimpliVity, which won the VMworld Best in Show Gold Award for Storage and Backup for Virtualized Environments. In particular, I'm going to spotlight SimpliVity's OmniCube. You can watch a video of SimpliVity's VFD presentation on OmniCube, given by Jesse St. Laurent, VP of Product Strategy.

OmniCube

OmniCube, like other converged data center hardware, has storage and compute built in to the 2U box (Figure A). It comes in three hardware models: CN-2000 for the SMB, CN-3000 for most data centers, and CN-5000 for high-performance applications. When you put several of these appliances together, SimpliVity calls it a federation, and you're able to mix and match models within a federation. You join a new OmniCube to a federation to expand the pool of resources. SimpliVity tries to avoid the word cluster because OmniCube is more self-aware than that word implies. You can also start with just a single OmniCube in your environment.
Figure A
SimpliVityA_091613.jpg
Inside each OmniCube runs what SimpliVity calls the SVT. The SVT is the software controller in charge of all the services, and it's what allows virtual machines (VMs) to access storage in several OmniCubes if necessary, removing the need for a large, expensive storage array on the backend. The SVT also makes it possible to use the OmniCube in an existing environment: you can present OmniCube storage to current physical ESXi hosts and still use OmniCube storage features.

Backups and snapshots

SimpliVity argues that traditional shared storage (i.e., datastores) is not the way to go when it comes to backups and snapshots, so the company practices a more VM-centric approach. Thanks to tight integration with vCenter, VMware admins can back up and restore the VMs they want without having to back up or snapshot the entire datastore. As the video presentation points out, though, there is still no easy, automated way to back up all VMs associated with a given application.
SimpliVity also has the ability to hook into Amazon's EC2 Cloud Services, so you can back up your VMs to the cloud, allowing you to minimize any hardware purchases.

Disaster avoidance and recovery

SimpliVity has what it calls a SimpliVity move: You can take a VM in one site and move the entire thing to another site. It will unregister the VM with the original vCenter and register it with vCenter in the new site. Unfortunately, any networking cleanup and so on is not automated currently. I asked: "What is the difference between this and VMware's Site Recovery Manager?" and Mr. St. Laurent pretty much avoided my question, which leads me to believe there will be some tight integration there, but that's conjecture. 
Unlike some other vendors in this arena, SimpliVity embraces connecting its appliances over WANs and still offers one GUI for ease of management and replication. 

OmniStack

OmniStack consists of the SVT controller as well as the OmniCube Accelerator, which is a customized PCIe card that offloads compute to allow the OmniCube to preserve CPU resources for other tasks and enables inline deduplication, optimization, and compression.  
From the SimpliVity site: "The OmniStack solutions incorporate three unique core innovations that SimpliVity brings to market:
  • Virtual Resource Assimilator: A single software stack that assimilates the functionality of multiple traditional IT infrastructure products into a single shared x86 resource pool.
  • Data Virtualization Engine: A novel data architecture where all data is compressed, deduplicated, and optimized at inception, inline with no impact to application performance.
  • Global Federated Architecture: An intelligent network of collaborative systems that provide massive scale-out as well as VM-centric single point management."
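To make the Data Virtualization Engine's "compressed, deduplicated, and optimized at inception" claim concrete, here is a toy content-addressed block store that dedupes and compresses inline, at write time. This is an illustrative sketch of the general technique only; the 4 KB block size, SHA-256 hashing, and zlib compression are my assumptions, not SimpliVity's actual design:

```python
import hashlib
import zlib

class BlockStore:
    """Toy inline-deduplicating, compressing block store (illustration only)."""

    BLOCK = 4096  # assumed fixed block size

    def __init__(self):
        self.blocks = {}   # sha256 digest -> compressed block payload
        self.writes = 0    # logical blocks written by callers
        self.stored = 0    # unique blocks actually kept on "disk"

    def write(self, data: bytes) -> list:
        """Split data into blocks; store each unique block exactly once."""
        refs = []
        for i in range(0, len(data), self.BLOCK):
            chunk = data[i:i + self.BLOCK]
            digest = hashlib.sha256(chunk).hexdigest()
            self.writes += 1
            if digest not in self.blocks:          # dedupe decision, inline
                self.blocks[digest] = zlib.compress(chunk)
                self.stored += 1
            refs.append(digest)
        return refs

    def read(self, refs: list) -> bytes:
        """Reassemble the original data from block references."""
        return b"".join(zlib.decompress(self.blocks[r]) for r in refs)
```

Writing the same block twice stores it once and hands back two references, which is why dedupe at inception saves both capacity and I/O; the real appliance offloads this hashing work to the OmniCube Accelerator card.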

What’s better than creating your own DDoS? Renting one

Posted: 20 Sep 2013 02:59 AM PDT

Thanks to the cloud, anyone can now initiate a DDoS attack. Find out how booter services work. 
ddosattack1.jpg
Interested in denying someone access to the Internet? Ten dollars provides a very nice DDoS (Distributed Denial of Service) platform, featuring one 60-second-long attack that can be used as often as needed for an entire month. For those wanting more, 169 dollars provides the ultimate DDoS: three two-hour-long attacks, also rentable by the month.
Bewildered by all the different suppliers? This forum reviewed the major cloud-based DDoS platforms, coming up with these favorites. 
top10Booters 2.jpg
Notice the slide's title refers to Booters; the industry calls for-hire DDoS attacks booters when they have an online customer interface. The slide also refers to stressers [sic]. That's an attempt to align with legitimate businesses that stress-test websites on how well they handle large volumes of incoming traffic. 
I first became aware of booters when my friend and security blogger, Brian Krebs, reported in this post that someone initiated a booter DDoS attack against his blog site. After reading Brian's post, I realized DDoS attacks were no longer just in the realm of experienced and knowledgeable hackers. For a nominal fee, anyone can easily wreak havoc on someone else's Internet experience.
Karami.Booters 3.jpg
Wanting to learn more, I did some digging and came across an interesting paper by Mohammad Karami (top picture) and Damon McCoy of George Mason University, "Understanding the Emerging Threat of DDoS-As-a-Service."
Mccoy.Booters 4.jpg
Mohammad and Damon start out by mentioning that researchers know little about the operation, effectiveness, and economics of Booters. A fortunate event changed that. It seems the operations database for one specific Booter — twBooter — became public, allowing Mohammad and Damon to gain significant insight into the inner workings, including:
  • The attack infrastructure
  • Details on service subscribers
  • Information on the targets
In an interesting departure from typical DDoS operations, Mohammad and Damon noticed Booter developers prefer to rent servers instead of compromising individual PCs: "Compared to clients, servers utilized for this purpose could be much more effective as they typically have much higher computational and bandwidth capacities, making them more capable of starving bandwidth or other resources of a targeted system."
Next, Mohammad and Damon were able to piece together twBooter's two main components: the attack infrastructure and the user interface (shown below).
twBooters 5.jpg
The user interface slide has a window showing the different available attack techniques. Using the database, Mohammad and Damon isolated the most popular attacks:
[T]wBooter employs a broad range of different techniques for performing DDoS attacks. This includes generic attack types such as SYN flood, UDP flood, and amplification attacks; HTTP-based attacks including HTTP POST/GET/HEAD and RUDY (R-U-Dead-Yet); and application-specific attacks, such as slowloris, that targets Apache web servers with a specific misconfiguration.
The gentlemen mentioned the above DDoS techniques accounted for more than 90 percent of the twBooter attacks. To determine the effectiveness of twBooter, Mohammad and Damon subscribed to twBooter, and set about attacking their own server. First up, the UDP attack: "The UDP flood used a DNS reflection and amplification attack to generate 827 MBit/sec of DNS query response traffic directed at our server by sending out large numbers of forged DNS request queries that included our server's IP address as the IP source address."
Next, the SYN attack: "For the SYN flood, we observed 93,750 TCP SYN requests per second with randomly spoofed IP addresses and port numbers directed at our server in an attempt to utilize all of its memory by forcing it to allocate memory for a huge number of half-open TCP connections."
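Some back-of-envelope arithmetic shows why these two measured attacks are so effective. The amplification ratio and per-connection memory cost below are my assumed illustrative figures; only the 827 Mbit/sec and 93,750 SYN/sec numbers come from the paper:

```python
# DNS amplification: a small spoofed query elicits a much larger
# response aimed at the victim, multiplying attacker bandwidth.
# Query/response sizes here are assumptions for illustration.
query_bytes = 64
response_bytes = 3000
amplification = response_bytes / query_bytes       # ~47x multiplier

# SYN flood: each half-open connection forces the victim's kernel to
# allocate a small control structure. Per-connection cost is assumed.
syn_per_sec = 93_750        # measured rate from the paper
tcb_bytes = 280             # assumed per-connection kernel cost
mb_per_sec = syn_per_sec * tcb_bytes / 1e6         # ~26 MB of state per second
```

Even with conservative assumptions, a victim accumulates tens of megabytes of half-open-connection state every second, which is why unprotected servers fall over quickly without mitigations such as SYN cookies.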
The following slide provides details.
table.Booters 6.jpg
To recap, twBooter exemplifies the new trend in DDoS platforms: a reasonably-priced, user-friendly DDoS platform fully capable of bringing down websites, even those with significant bandwidth accommodations.

Something else I found interesting: even though twBooter did not make the Top 10 (maybe the data leak had something to do with it), Mohammad and Damon determined that twBooter earned its owners in excess of 7,000 dollars a month. That amount resulted from customers launching over 48,000 DDoS attacks against 11,000 separate victims.

Final thoughts

Oddly enough, booters started out filling a niche, one that allowed online gamers to momentarily knock opponents out of the game, gaining themselves a distinct, albeit unfair, advantage. Other enterprising underworld individuals decided to repurpose booters into powerful DDoS platforms for hire — simple, yet effective.

Troubleshoot Outlook connectivity with these quick tips

Posted: 20 Sep 2013 02:49 AM PDT

When Outlook won't connect to the Exchange server, follow these steps before calling IT for help. 
Microsoft Outlook is often rendered useless because it cannot connect to its Exchange server. Sometimes troubleshooting the issue is as simple as closing Outlook and restarting. In other instances, troubleshooting is much more challenging... or so it seems.
The following troubleshooting tips make solving that connectivity loss a snap. These instructions don't require a computer science degree to follow, so just about anyone should be able to get Outlook reconnected to their Exchange server. We'll start with the simplest tip and increase the difficulty as we go along.

Uncheck offline mode

Oftentimes when a client calls and says, "My email won't work!" I find that Outlook was somehow set to offline mode. If you're using Outlook 2007 or earlier, click the File menu. If there is a check mark next to Work Offline, uncheck it, and you should be good to go.
If you're using Outlook 2010 or higher, follow these steps:
  1. Click the Send/Receive tab.
  2. Locate the Work Offline button.
  3. Click the Work Offline button to toggle it off.
At the bottom of your Outlook window, you should see Trying To Connect.... If it connects, your problem is resolved; if not, move on to the next solution.

Restart

You should restart Outlook and, if that fails, restart your computer. I cannot tell you how many times I've seen Outlook connectivity issues resolve with a simple restart. The issue could also be the computer itself having network connectivity problems: if you open your web browser and cannot reach a website or internal resources, that's most likely the culprit.
If Outlook still cannot connect and you cannot reach any websites or internal resources, contact your IT department because you have a networking issue. Once that is solved, Outlook will be fine.

Rebuild

Outlook can use two types of data files (.pst and .ost), and both are susceptible to errors that can cause connectivity problems. Here's how I handle this:
  1. Close Outlook.
  2. Open the Control Panel.
  3. Locate the Mail icon (depending on how Windows Explorer is set up, you might have to click the Users section to find the Mail icon).
  4. In the resulting window, click Data Files.
  5. Select your data file from the list and click Open File Location (Figure A).
  6. Locate the data file in question (it will probably have the same name as your email address).
  7. If the file has the extension .ost, rename the extension to .OLD. If the file has the extension .pst, do nothing at this time.
  8. Close these windows and open Outlook.
Figure A
outlook_fig1_091613.png
This window will list all data files in use with Outlook.
Note: You need to be able to see file extensions in order to know if your data file is a .pst or .ost. This is handled through Windows Explorer settings.
If your data file is a .pst, follow these steps to run Scanpst on the file:
  1. Search for scanpst.exe through Windows Explorer.
  2. After you locate the file (e.g., a location could be C:\Program Files\Microsoft Office\Office14\), double click to run the application.
  3. From the resulting window, click Browse (Figure B).
  4. Locate your .pst file.
  5. Click Start.
Figure B
outlook_fig2_091613.png
If you've run Scanpst on your data file before, the location will already be in the field.
Scanpst will run eight passes over the data file; depending on the size of your data file, this can take quite a while. If Scanpst finds errors in the data file, it will prompt you to click the Repair button. You should also check the box for Make Backup Of Scanned File Before Repairing in case something goes awry.
After the repair is complete, close Scanpst and re-open Outlook. If Outlook still cannot connect, move on to the next tip.

Repair install

You can run a repair installation of Microsoft Office; this will solve problems that standard fixes cannot repair. To do this, follow these steps:
  1. Open the Control Panel.
  2. Click Programs and Features.
  3. Locate the entry for your Microsoft Office installation and select it.
  4. Click Change.
  5. Select Repair from the resulting window.
  6. Click Continue.
  7. Allow the repair to complete.
  8. Reboot your computer.
After your computer has rebooted, start Outlook and hope for the best.

Recreate your profile

When all else fails, you can recreate your Outlook profile. I prefer to create a new profile (without deleting the old one) -- just in case. In order to recreate your profile, you need to know your account settings, so gather that information before you begin. Here's how to create a new profile:
  1. Open the Control Panel.
  2. Open Mail.
  3. Click Show Profiles.
  4. Click Add (Figure C).
  5. Give the profile a name.
  6. Walk through the Outlook account setup wizard.
  7. Once the profile is known to work, you'll either want to set that profile up as the default or delete the old profile.
Figure C
outlook_fig3_091613.png
The Outlook profile manager.
If after all of these steps Outlook is still unable to connect, it's time to call the IT department. It could be a DNS issue, an Exchange issue, or a number of other possibilities that are outside the scope of this article.
