In Black & White

Thin Clients versus Fat Clients Explained

Written by Waldo Nell | Apr 7, 2016 11:23:03 PM

Has it ever seemed like the IT folks spend their days making up arcane terms and acronyms? Every time you turn around there is some new three-letter acronym (obligatory acronym: "TLA") or concept being discussed. It can be enough to make the finance officer let it all go in one ear and out the other.

But here are two important concepts you should know something about: fat clients and thin clients. In this Tech for Execs post we explain these terms to give you a high-level understanding of their meaning and the potential value they can provide your organization.

Why bother with these particular terms? After all, they are hardly new technology, and you are likely not hearing about them as often as the latest buzzwords like "cloud computing".

The reasons to care are three-fold:

  • It is very likely your IT department is using or has recommended "thin client computing" to you (for good reason!).
  • In many circumstances it can significantly reduce IT costs while increasing end-user performance.
  • Large CaseWare Working Papers files will almost certainly run MUCH faster in a Thin Client model.

The Scenario

You work for a government / university / corporation with many employees. It is very likely that most of your users have somewhat similar needs in terms of applications and files. All users would most likely want access to Microsoft Word and Excel, your organization's line-of-business application(s), your accounting software if you are in the finance department, and so on. Further, you have files that need to be accessed by multiple individuals, perhaps simultaneously, perhaps from many different physical locations.

How does your organization deal with these requirements?

There are numerous variations and exceptions, with more innovations coming all the time. That being said, there are currently two predominant approaches to meeting these needs.

1) Fat Client (Traditional) Model - only data resides on servers

How it works:

In the traditional approach, one simply provides each user with their own computer, installs Microsoft Word, Excel, Outlook, etc. on each one and lets everyone work independently.

You start your computer and arrive at your desktop. Then you commence running your applications (Excel, Word etc.), which use your computer's resources (processor, memory, storage etc.). If you have no network connection, you can keep on working with all the files that you have saved on the local hard drive.

Things get more interesting when your applications need data from a server. When data is accessed from the server, the following occurs (a rough code sketch follows the list):

  1. The workstation makes a request to the server - "Please send me the data in this large Excel file that has been saved on the X drive."
  2. The server pushes all the data (and if it is a big spreadsheet it might be a LOT of data) across the network cable to your workstation.
  3. You commence working on the spreadsheet: adding rows, inserting subtotals, recalculating and so on. This work is done on your local machine, but every save and every request for more data goes across your network cable. Again, when dealing with a big file, this can mean very large quantities of data and therefore slow application performance.
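
To make step 2 concrete, here is a minimal Python sketch; the X: drive path and file name are hypothetical, and the point is simply that opening a file from a mapped network drive pulls every byte across the wire before any local work begins:

    # Minimal sketch - the network path below is hypothetical. In the fat
    # client model, opening a file from a mapped server drive copies all of
    # its bytes across the network before you can work on it.
    import time

    NETWORK_FILE = r"X:\shared\large_workbook.xlsx"   # assumed mapped drive

    start = time.time()
    with open(NETWORK_FILE, "rb") as f:
        data = f.read()          # every byte travels across the network cable
    elapsed = time.time() - start

    print(f"Pulled {len(data) / 1_000_000:.1f} MB in {elapsed:.1f} s")
    # Saving the file later pushes all those bytes back across the wire again.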

Pros & Cons:

On the upside:

  1. Other users' activities on their computers have minimal effect on your performance.
  2. Minimal reliance on complex, expensive servers in this model. Typical use of servers in this environment is for file storage, checking & maintaining passwords and perhaps hosting a database.
  3. Users can be "self-sufficient" - as long as they have a laptop and the working data they need - when they go home on the weekend or travel to a conference.

The downside:

  1. Every computer needs to be maintained. Each must be updated for security and bug fixes, as well as for updates to each and every software program.
  2. Each computer's hardware must be maintained at levels acceptable for the software applications they are going to be using. As the software programs require more resources, you must upgrade each and every computer that will use that program.
  3. If any data is stored on the local computer, it then needs to be backed up to protect the organization's data.
  4. If a new application is needed, it is likely that it has to be installed on multiple computers.
  5. Because each workstation brings all the data across the network cable to be worked on locally, there is often a tremendous amount of network traffic. In a modern network this may not be an issue, but if there are very large quantities of data or multiple physical locations that must communicate, the network bandwidth capacity may not allow for quick transmission of all the data required.
  6. Because of points 1 & 2 above, the organization is committed to continuous investment in each individual workstation to ensure the hardware is capable of running the current version of the software.

This is where the second approach comes to the rescue.   

2) Thin Client (Virtual) Model - data & applications on servers

How it works:

In this model, instead of using multiple powerful workstations and laptops to run the applications, all of that work is done by one or more servers. End users are instead given terminals, sometimes called "dumb terminals". Essentially all these do is present the user with a picture of a computer interface on their monitor and accept the user's input (mouse clicks and keyboard typing), sending it along to the server for processing. All data, programs and processing remain on the server. In many ways this model is a return to the mainframe model of the 1950s.

There are two primary technologies that drive this model, one provided by Microsoft ("Remote Desktop Services" or RDS) and another provided by Citrix (XenDesktop). If you want or need to participate more fully in a conversation with IT about these technologies, you can find the details here.

Pros & Cons:

The benefits of Thin Client computing are many:

  1. "Dumb Terminals" need little to no maintenance. Only the servers need to be upgraded & maintained.
  2. Only the server's hardware must be maintained at levels acceptable for the software applications and the number of users (more users = more horsepower required for servers). 
  3. No data can be stored on the local computer. Thus data is better controlled and backed up.
  4. If a new application is needed, it must only be installed once - on the server.
  5. Thin Client software is long established and well known in the IT community.
  6. Network traffic is made more predictable and even. Depending on the data being presented to the remote user, network traffic may even be reduced, so even with minimal network bandwidth capacity end users may not experience any slowness. Especially when remote access across the Internet / VPN is required, this may be the biggest benefit of all (see the rough arithmetic after this list).
  7. Because of points 1, 2, 3 & 4, the overall result is often a much lower total cost of ownership for the organization versus the traditional model.
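
To see why the traffic becomes predictable, consider some rough, back-of-the-envelope arithmetic in Python; every figure below (file size, number of saves, screen stream rate) is an assumption chosen for illustration, not a measurement:

    # Rough, illustrative arithmetic - all figures are assumptions, not
    # benchmarks. Compare bytes crossing the network to edit one large
    # shared workbook during a one-hour session.
    FILE_MB = 200             # assumed size of the shared spreadsheet
    OPENS_AND_SAVES = 6       # assumed opens/saves during the session

    # Fat client: the whole file crosses the wire on every open and save.
    fat_client_mb = FILE_MB * OPENS_AND_SAVES

    # Thin client: only compressed screen updates and keystrokes cross the
    # wire, at a roughly steady rate regardless of file size.
    SCREEN_KBPS = 150         # assumed average RDP/Citrix stream rate
    SESSION_SECONDS = 3600
    thin_client_mb = SCREEN_KBPS * SESSION_SECONDS / 8 / 1000

    print(f"Fat client : ~{fat_client_mb} MB over the wire")
    print(f"Thin client: ~{thin_client_mb:.0f} MB over the wire")

Under these assumptions the thin client session moves well under a tenth of the data, and that figure stays roughly constant no matter how large the file grows.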

But there are some concerns & drawbacks:

  1. If users need to deal with a lot of video & audio (especially creation & editing), the centralization of thin client computing can tax the servers tremendously, resulting in poor performance.
  2. The servers (their software and setup) are more complex & expensive than those in the traditional model.
  3. The downside of centralization is a single point of failure. Should your application server fail, all your users will be affected. To mitigate this risk, IT must build in redundancy which offsets some (not all) of the savings.
  4. Users might be tied to a terminal, unable to move around the office or off the network to perform any of their work. To be clear, your IT department can provide remote access to the RDS / Citrix server, but they will often be very concerned about the increased security risk of doing so.

Impact on CaseWare:

Very large CaseWare Working Papers files used in a traditional model (data file on the server and application on your workstation) can be very slow, especially when network performance issues get in the way. There are some recommendations for maximizing CaseWare performance in this model, but you might benefit more by changing models.

Moving to a Thin Client model with CaseWare can yield major performance improvements. Note, though, that due to the internal architecture of CaseWare Working Papers, the closer the Working Papers file is to the actual application, the better the performance. By close we mean not physical distance, but access time. The best performance will be obtained when the Working Papers file is stored on the same RDS/Citrix server that hosts the CaseWare Working Papers application.
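
To illustrate what "close in access time" means, here is a small Python sketch with assumed, order-of-magnitude figures; the read count and latencies are illustrative, not measured from Working Papers:

    # Illustrative only - assumed figures, not measurements. When an
    # application issues many small reads, per-read latency dominates.
    READS = 50_000                        # assumed small reads on a large file

    latencies_ms = {
        "same RDS/Citrix server": 0.1,    # file local to the application
        "share across the LAN":   1.0,
        "share across a WAN/VPN": 20.0,
    }

    for location, ms in latencies_ms.items():
        print(f"{location}: ~{READS * ms / 1000:,.0f} s waiting on access latency")

Even with identical bandwidth, the same work that takes seconds when the file sits on the application server can take minutes over a remote link, purely from accumulated latency.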

If your organization is not currently using one of these thin client solutions, and your only concern is CaseWare performance, we also strongly suggest considering CaseWare SmartSync.

To get more tech tips for executives, be sure to sign up for our blog In Black & White.