In Black & White

Winning Strategies for Accountants

What is your Bitcoin strategy? Accept / Invest / Avoid

  • Waldo Nell
  • 14 November, 2018
  • Tech for Execs
Most people have heard of "Bitcoin". It was all over the news about a year ago with massive increases in value, which led some to ponder whether Bitcoin would become the currency of the future. Much has happened with Bitcoin in the last year (both positive and negative), and we thought it was time to consider the opportunities of Bitcoin from a Finance Department perspective.

As you likely know, Bitcoin (BTC) is one of many types of cryptocurrency. Other well-known cryptocurrencies include Litecoin (LTC) and Ethereum (ETH). Bitcoin is the oldest and most well-known of the cryptocurrencies. It has been around for almost a decade, but only recently began to experience significant increases in value. This increase led organizations around the globe to wonder if / how they should take advantage.

1) Should your organization accept Bitcoin?

For most finance departments, there are 3 reasons not to rush to accept Bitcoin (or any digital currency):

Low Demand - There is little consumer demand to pay with Bitcoin. This is driven by the appreciation hopes of those who hold Bitcoin, and by high transaction fees.

Perceived Legal Risk - Digital currencies are treated as 'commodities', not currencies. This can limit your legal options in the event of theft.

Increased Administration - The steps required to accept digital currencies are not the same as those for traditional currencies. While straightforward, it is one more barrier to moving forward. Further, given the price fluctuations discussed below, you may want to convert quickly from Bitcoin to cash. This is an additional administrative task that likely further discourages adoption.

The result is that Bitcoin acceptance is low and getting lower. I'm not certain at this date what would compel finance to prioritize its acceptance.

2) Should finance invest in Bitcoin?

For nearly all finance departments, the answer is that holding significant amounts of Bitcoin today is too risky. Starting in Q3 2016, Bitcoin's value began growing exponentially. In Q2 2016 a single bitcoin was valued at approximately $590. Approximately six months later, in Q1 2017, the value jumped to $1,400, and in Q4 2017 it peaked at $25,000, before falling to today's value of $8,200 (CAD). This exponential growth followed by a rapid decline highlights the volatility and uncertainty inherent in this modern currency, and that volatility makes significant Bitcoin holdings by finance unwise.

Some look to Bitcoin futures to bring Bitcoin into the mainstream financial markets. On the prospect of these futures, JP Morgan rated Bitcoin "Bullish" as an investment asset, and some analysts believe the advent of these futures contributed to the major valuation gains seen in late 2017. Largely, however, the futures market has been disappointing.

"Institutional players have stayed on the Bitcoin sidelines, and as long as they are, the futures contracts are likely not to generate substantial amounts of volume." - Craig Pirrong, finance professor at the University of Houston

The future of Bitcoin and its underlying blockchain is definitely something to keep an eye on, but given all the above, finance should view Bitcoin like any very risky investment: only invest amounts that you can afford to lose.
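For the quantitatively inclined, the scale of that swing is easy to verify. Here is a quick back-of-the-envelope calculation in Python using the approximate valuations quoted above (the script itself is purely illustrative):

```python
# Rough volatility math using the approximate valuations quoted above.
prices = {
    "Q2 2016":        590,
    "Q1 2017":      1_400,
    "Q4 2017 peak": 25_000,
    "Nov 2018":     8_200,
}

peak, today = prices["Q4 2017 peak"], prices["Nov 2018"]

# Gain from Q2 2016 to the late-2017 peak: roughly a 42x multiple.
print(f"Q2 2016 to peak: {peak / prices['Q2 2016']:.0f}x")

# Decline from the peak to late 2018: roughly 67%.
print(f"Decline from peak: {(peak - today) / peak:.0%}")
```

An asset that gains roughly 42x and then sheds two-thirds of its value inside two years is, for treasury purposes, a speculative holding.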
12 months ago, Bitcoin was all over the news with meteoric valuation gains. The last year has seen the value drop considerably. What should your finance department do in 2019 with Bitcoin? We answer the two biggest questions.
Windows Security Basics for the Finance Professional

  • Waldo Nell
  • 17 January, 2018
  • Tech for Execs
Finance professionals interact with Windows security every day when they provide a username and password to log in to their computer/network. Another common interaction occurs when they want to install a new application and are stopped and forced to call IT. Finance professionals can struggle when security prevents them from accomplishing necessary tasks. As we have argued elsewhere, it is useful to have a bit of insight into how information technology works so you can either solve the problem yourself or better communicate with IT. Just as we did with passwords, we hope to provide finance professionals in small organizations with a better perspective on how their IT department (often outsourced to consultants) ought to leverage File Access Permissions to help mitigate risk.

There are many different kinds of security involved on a typical computer helping protect you or your organization from problems (malware infecting your network, unauthorized access to company files, etc.). One such security mechanism is called File Access Permissions. Please note that this topic is actually complex and varies by specific operating system. To keep it brief, we will simplify some topics (while remaining largely accurate) and assume a recent Windows operating system.

Access Control Lists and Permissions

Each file, folder, and network share has an associated Access Control List (ACL). An ACL is a list of users/groups together with the access rights (called Permissions) dictating what each can do with the file/folder/share. Common permissions include Read and Write. ACL entries might look like this:

File: Inventory2017.xlsx
  User Bob: read permission
  User Mary: read/write permission

Assuming no other rights are granted elsewhere, user Bob has permission to read the file, but only user Mary is allowed to both read the file and make changes to it. If user Bob tries to save the file, he will get an error.

In a well-designed network, ACLs are usually defined a bit differently, as the above example is quite brittle and hard to maintain. Imagine how much work it would be to assign access rights file by file, and how easy it would be to make a mistake.

Active Directory, Security Groups, Files and Folders

In a typical Windows-based network, information on all employees, consultants, and computers is stored in a central directory called Active Directory (AD). It is nothing mysterious: a hierarchical tree (like an organizational chart) grouping users and storing details such as your login name, password, email address, etc. The grouping feature is valuable. The idea is simple: a security group is a collection of people that share similar access rights. For instance, HR staff may need access to specific files while Accounting may need access to different files. By creating an HR group and an Accounting group, and adding all HR staff to the HR group and Accounting staff to their group, the IT administrator can apply security policies based on groups, not individual users. When an HR employee leaves the company and a new one is hired, the new employee need only be added to the right group and all access permissions automatically apply.
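To see why group-based permissions reduce administrative work, consider this toy model in Python. It is a sketch of the concept only - Windows' real ACL machinery is far richer - and the users and groups are invented:

```python
# Toy model of group-based ACLs -- a concept sketch, not the real
# Windows security machinery. Users and groups are invented.
groups = {
    "HR Staff":    {"alice", "bob"},
    "HR Managers": {"mary"},
}

# One ACL on the folder, expressed against groups rather than users.
folder_acl = {
    "HR Staff":    {"read"},
    "HR Managers": {"read", "write"},
}

def permissions_for(user):
    """Union of the permissions of every group the user belongs to."""
    perms = set()
    for group, members in groups.items():
        if user in members:
            perms |= folder_acl.get(group, set())
    return perms

print(permissions_for("bob"))    # {'read'}
print(permissions_for("mary"))   # {'read', 'write'}

# Staff turnover becomes a membership change, not a per-file edit:
groups["HR Staff"].discard("bob")   # Bob leaves the company
groups["HR Staff"].add("carol")     # Carol is hired into HR
print(permissions_for("carol"))     # {'read'}
```

The turnover scenario at the end is the point: access follows the group, so nobody has to touch the files themselves.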
Thus, a well-designed ACL will leverage these groups and might look like this:

Folder: E:\data\human resources
  Group HR Staff: read permission
  Group HR Managers: read/write permission

Two benefits of the above approach:

1. The ACL is applied to the whole folder, ensuring that all files and folders stored in it share the same permissions. New files automatically inherit the folder's permissions.
2. Permissions are assigned to groups: anyone belonging to HR Staff gets read-only access to the files and folders, and anyone in HR Managers gets read/write access. This reduces the workload on IT and ensures consistency.

Network Shares and Conflicting Rights

In addition to file and folder level ACL entries, a network share can have a separate set of permissions. In the example above, assume the folder is shared on the network under the name "human resources". The share itself can then have an ACL associated with it, similar to the following:

Share: \\server\data\human resources
  Group Everyone: read

With the share ACL configured as above, nobody accessing the folder through the share would be able to write to it. Both the share ACL and the file/folder ACL need to allow access. They are defined in two different places and are not related to each other in any way. The best way to think of it: the most restrictive permission wins.

One other permission you should know of besides Read and Write is Execute. A program file requires you to have the Execute permission before you may launch it.

Attributes

Lastly, files and folders may have certain attributes. The Read Only attribute is significant to end users, as this flag overrides any write permissions set on the file. If a file is marked Read Only, the user will be unable to modify it even if they have write permission in the ACL and the shared folder permissions discussed above. This flag can usually be removed by the end user, as long as the user has write access to the file. Once the flag is removed, the file can be written to, assuming the ACL grants write access.

Security Warnings on Downloaded Files

One last issue you may run into from time to time has to do with files downloaded from the internet. If you are using Windows 7 or later, your computer keeps track of the source of the file and will protect you from files that originated outside your organization. Windows uses a feature called "Alternate Data Streams" (ADS) to remember which files came from external network sources. When Windows detects you trying to open one of these files, it will warn you. If your IT department allows it, Windows will ask for your explicit permission to open the file. However, your IT department can also configure your permissions so that you have no choice and are simply blocked from opening these files.

In conclusion, access to files and folders in Windows passes through several layers of protection:

1. File/folder-based ACLs assigned to groups/users
2. Network share-based ACLs assigned to groups/users
3. File-based attributes such as Read Only
4. Downloaded-file ADS blocking

All four layers need to grant you access before you can work with a file. Access might also be partial, such as read-only, or read/write but not delete.

What you should try if you have access issues: to check whether you have access to a file or folder, open File Explorer, navigate to the file/folder in question, right-click and select Properties.
1. For file-based ACL rights, go to the Security tab. Click on your name or the AD group you belong to (you may need to ask IT if you do not know this) and check your permissions.
2. For network share-based ACL rights, locate the mapped network drive, right-click the drive and select Properties. The Security tab will show the ACL for the network share.
3. For file-based attributes, review the General tab and check whether Read-only is ticked.
4. To see if a downloaded file is being blocked (assuming you have write access to the file), right-click it, select Properties, and unblock it at the bottom of the Properties window.
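If you are comfortable with a little scripting, layers 3 and 4 can also be checked programmatically. The sketch below assumes a recent Windows machine with Python 3.5+, and the file path is invented - substitute the file you are troubleshooting:

```python
# Check layers 3 and 4 for a file you cannot open (Windows, Python 3.5+).
import os
import stat

path = r"C:\data\Inventory2017.xlsx"   # hypothetical file -- use your own

# Layer 3: the Read Only attribute (General tab in Properties).
attrs = os.stat(path).st_file_attributes
print("Read Only attribute:", bool(attrs & stat.FILE_ATTRIBUTE_READONLY))

# Layer 4: the Zone.Identifier alternate data stream that Windows
# attaches to files downloaded from the internet.
try:
    with open(path + ":Zone.Identifier") as ads:
        print("Downloaded file; zone info:", ads.read().strip())
except OSError:
    print("No download zone marker -- the file is not blocked")

# Layers 1 and 2 (file/folder and share ACLs) are best inspected in the
# Security tab as described above; os.access only gives a rough hint on
# Windows, since it mostly reflects the Read Only attribute.
print("Write access (approximate):", os.access(path, os.W_OK))
```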
Have you been denied access by Windows to a file you need? Here are the Windows Security basics to aid finance in fixing the issue or communicating with IT.
What You NEED To Know About Password Security

  • Waldo Nell
  • 11 July, 2017
  • Tech for Execs
As we have discussed previously, a little knowledge about technology & security can go a long way to mitigating risk. That is especially true of one very important and fundamental topic: passwords.

There are three basic methods to authenticate yourself to a 3rd party (e.g. a website, an application, or your network):

What you have refers to something in your physical possession - a key, a phone, an access card.
What you are usually refers to biometrics - your fingerprint, your retina, your voiceprint, DNA, etc.
What you know typically means a password, but could also refer to security questions like "What is your mother's maiden name?"

This article considers two important aspects of the use of passwords in the modern day:

1. How secure is your password?
2. What can you do to improve your digital security?

Most corporate systems today still ignore the first two authentication methods. They are protected by the venerable password - a combination of letters, digits and symbols that acts as the key to unlock some digital asset. The strength of the password system relies on the assumption that you have chosen a combination of characters that is easily remembered by you, but not easily guessed by a bad guy trying to access your asset. We use the term "asset" since passwords are used to protect various different things, such as online banking and email accounts, social media accounts, the login to your office PC, password-protected documents, the PIN on your bank card and so on.

How secure is your password?

Stop and think about the password you used to log in to your own computer today. Most systems have some basic rules like "at least 8 characters long, with at least one upper case letter, at least one lower case letter, and at least one number." Your password might look a lot like "passw0rd". (Don't be embarrassed, it's a very common password.)

If I wanted to break into your computer, how could I guess that password? There are several ways to approach this problem. Let's consider just one: a brute force attack. The computer program we will use starts from a specific character set. Let's simplify and assume the character set consists of all lowercase and uppercase letters and digits: a-z, A-Z and 0-9. The program will try "a" as its first guess at your password. It will fail, and it will then try "b", and so on until it gets to "z". It will then try "A", "B", ... "Z", "0", "1", ... "9". It will then move on to "aa", "ab", "ac" and so on until it gets to "99999999". A simple calculation shows that the program has to make (26 + 26 + 10)^8 = 62^8 = 218,340,105,584,896 guesses to try all of the possible 8-character passwords. 218 trillion combinations! It sounds so large as to be impenetrable. An average office PC from 2005 would indeed take about 170 years to break this password, but a high-end desktop computer of today can crack it within an hour.

You may be wondering how bad guys can make so many guesses in an hour without getting detected. The trick is that bad guys rarely try to log in as you directly. Instead, they exploit system vulnerabilities and download large lists of scrambled passwords for thousands/millions of accounts. Once they have the list on their local system, they can guess the passwords (using brute force and other methods) at their leisure. Once they have worked out what the passwords are, they then try to log in using the compromised credentials.
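The arithmetic above is easy to verify yourself. In the Python sketch below, the guessing rate is an illustrative assumption (real attack hardware varies widely); the search-space figures are exact:

```python
# Brute-force search-space arithmetic for the example above.
charset = 26 + 26 + 10                 # a-z, A-Z, 0-9 = 62 characters

combos_8 = charset ** 8
print(f"8-character passwords: {combos_8:,}")       # 218,340,105,584,896

# Assumed offline guessing rate for a modern high-end desktop.
guesses_per_second = 60e9
hours = combos_8 / guesses_per_second / 3600
print(f"Time to try them all: {hours:.1f} hours")   # about an hour

# Each extra character multiplies the work by 62, so a 12-character
# password makes the search 62**4 times larger than an 8-character one.
print(f"12 vs 8 characters: {charset ** 4:,}x more work")  # 14,776,336x
```

That multiplier is why the first recommendation below is about length.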
In real life, bad guys very rarely resort to brute force attacks for longer passwords. Instead, they build up huge word lists consisting of previously cracked passwords from breaches such as Yahoo, LinkedIn, Dropbox, Ashley Madison and so on. Since many people reuse passwords across sites, these lists allow bad guys to crack passwords on many sites quickly.

What can you do to improve your digital security?

Security is hard to get right. The best we can do at this time is to make use of something called defense in depth. The general idea is to not rely on any single measure to protect you, but to add multiple layers that in combination dramatically mitigate overall risk. Here are 7 recommendations designed to do just that:

1. Use better passwords - Never use the names of your children, birth dates or anything personal in your passwords. Pick a random 12-character or longer password which mixes lower case, upper case, and digits. This one step increases the time to crack the password from under an hour for a random 8-character password to roughly 1,700 years for a 12-character one (62^4, or about 14.8 million, times longer). Add special characters (%, $, #) to increase it even more.

2. Be unique, do not share & do not write - Do not share your passwords. Never reuse passwords between accounts, because cracking one password then gives access to many other assets. Do not write your passwords down unless you can store them somewhere you can guarantee their physical security.

3. Increase the length of your passwords every 3 years - Make sure to revisit your passwords every two to three years. Remember, as long as Moore's law* holds, passwords that are considered secure today will become weaker in the future. A general rule is to increase the length of your password by 1 character every 3 years.

4. Use password management software - Most of us have dozens of accounts, and remembering dozens of different, random passwords is near impossible. Fortunately, there is a solution to this dilemma. Password managers are applications you install on your computer/phone that remember and manage your passwords for all the various sites you frequent. Your passwords are stored in a secure, encrypted vault which you protect with a single master password (one that is long and hard to guess, but easy for you to remember). This one master password protects all the other random passwords. Examples of good password managers are 1Password and LastPass.

5. Two Factor Authentication - Many sites & software today allow you to set up 2FA (Two Factor Authentication). In addition to providing your password when you log on to a service, a short code is sent to your phone as a challenge for you to repeat. By typing in the correct code, you verify what you know (your password) as well as what you have (your phone). It is much harder for a bad guy to both know your password and control your phone. If your service provides this feature, enable it. It usually costs nothing extra.

6. Use fake answers to security questions - Many sites require you to provide one or more security answers in case you lose your password and have to reset it. Providing real answers significantly weakens the security of the account, as it is trivial in today's connected world for a bad guy to scour Facebook / Twitter / Google and find out your mother's maiden name or the name of your high school.
It is best to use pronounceable but obscure, false answers and store them in your password manager.

7. Restrict remote logins - If practical, have IT prevent remote logins by default for all users; in other words, all users must be in one of your offices to access your systems. Since few attackers will risk sauntering into your office and sitting down at your computer, this improves your security considerably. For those users that do work remotely, have IT limit access to specific authorized locations (IP addresses), or make use of a VPN.

*Back in 1975, Gordon Moore (working at Intel) predicted that the number of transistors in an integrated circuit would double every two years. This translates loosely to a doubling of processing power every two years, also known as exponential growth. The prediction has held true for four decades, and the implication for our digital security is significant: the computer you buy in two years can crack longer passwords in less time than the one you have today.
Finance officers rely on technology & trust their passwords to provide security for confidential data. These 7 tips ensure your passwords are up to the task.
Spreadsheets vs. Databases: How to weigh the Tech Benefits

  • Jamie Black
  • 15 June, 2017
  • Tech for Execs
No doubt you have read many articles decrying the use of spreadsheets due to the myriad of disasters that have resulted from their use. We have a long history of ranting about their weaknesses when it comes to complex reporting tasks like generating a set of financial statements/CAFR. We have even written articles about how to avoid these mistakes, and about how to determine (calculate) if a spreadsheet is the right tool for a given task. What we have never done is present the technical reasons that spreadsheets are ill-suited for the job and why database-driven applications are far superior. That is what we will do in this article.

Before we dive in, we should state a few items (before the hate mail comes pouring in):

There are many spreadsheet applications & database-driven tools. Clearly, we cannot discuss each one and their differences in detail. Instead, we will speak generally about typical, common features found in products like Microsoft Excel/Google Sheets and Microsoft SQL Server.

Spreadsheet applications have grown increasingly sophisticated while remaining inexpensive and easy to use, which has naturally led to their use for more complex tasks. If left to choose between a spreadsheet and paper & pen, we would recommend the spreadsheet.

Some of the problems people have with spreadsheets occur because they fail to follow best practices. Thus, some of the highly publicized disasters reflect a skill/experience problem, not a software problem.

We are active proponents of the view that just because you CAN use a tool for a task does not mean it is the BEST tool for the job.

Similarities

Spreadsheets provide a rapid, powerful, generic way to manipulate numbers in any form or fashion your heart desires. The intersection of columns and rows allows you to enter data or formulas to generate a wide range of results. A database-driven application extends the concept of a spreadsheet. A very rough comparison is that each "tab" in a spreadsheet workbook is equivalent to a database table, with each table consisting of rows and columns. A database is a collection of these tables and other metadata, just as a spreadsheet is a compilation of sheets. To be clear, we are using the term database here as it is the common expression.

Differences

Data (Transactions)
  Spreadsheet: User(s) perform data entry on the same spreadsheet that houses all the data.
  Database-driven application: Two major approaches, both of which prevent the user from directly affecting previously recorded data. Either the user completes a form and the system validates the data and, if acceptable, adds it to the appropriate table(s); or the user imports large volumes of data, which is validated to ensure acceptability and then added to the appropriate table(s).

Calculation(s)
  Spreadsheet: User(s) add calculations/formulas to cells that reside with the raw data.
  Database-driven application: Calculations are performed as needed, either on a report or form.

Formatting
  Spreadsheet: Like calculations, formatting is usually applied by the user to a selection of cells.
  Database-driven application: Formatting is applied to the report that pulls data from one or more tables.

Validation
  Spreadsheet: When used (infrequently, in our experience), user(s) add it to a selection of cells.
  Database-driven application: Applied centrally to all actions involving data (importing, deleting, adding new records, etc.).

Data Integrity
  Spreadsheet: No certainty that data was added correctly & completely.
  Database-driven application: Generally speaking, considerable technology is dedicated to ensuring integrity. For more details see this article on ACID.

Scalability
  Spreadsheet: Has improved over time, but not designed to handle massive volumes of data.
  Database-driven application: Depending on the particular database technology selected, capable of handling millions of records.

Audit/Logging
  Spreadsheet: Some support, with major gaps.
  Database-driven application: Secure logging of essentially any & every item determined valuable is possible.

Security
  Spreadsheet: Can be set at the cell level, but roles & groups are not supported.
  Database-driven application: Extensive ability to assign roles & groups and specify permissions at any level.
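The Validation and Data Integrity rows are easy to make concrete. Below is a minimal sketch using Python's built-in sqlite3 module - the table and column names are invented for illustration. It shows a rule enforced centrally by the database, and a failed batch leaving no partial data behind:

```python
# Central validation and all-or-nothing transactions, using the
# sqlite3 module from Python's standard library.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE inventory (
        item     TEXT NOT NULL,
        quantity INTEGER NOT NULL CHECK (quantity >= 0)  -- central rule
    )
""")

# A valid row is accepted; an invalid one is rejected by the database
# itself, for every user and every entry path.
con.execute("INSERT INTO inventory VALUES ('widgets', 40)")
try:
    con.execute("INSERT INTO inventory VALUES ('gadgets', -5)")
except sqlite3.IntegrityError as err:
    print("Rejected:", err)
con.commit()

# Transactions: either every statement in the batch commits, or none do.
try:
    with con:
        con.execute("INSERT INTO inventory VALUES ('bolts', 100)")
        con.execute("INSERT INTO inventory VALUES ('nuts', -1)")  # fails
except sqlite3.IntegrityError:
    pass  # the whole batch was rolled back, including 'bolts'

print(con.execute("SELECT item FROM inventory").fetchall())
# [('widgets',)]
```

Contrast this with a spreadsheet, where a validation rule exists only on the cells someone remembered to configure.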
Spreadsheet weaknesses

The argument for continuing to use spreadsheets is obvious: they are inexpensive and easy to use at a basic level. The problems with them are considerable, however, and a common theme runs through the comparison above. The major technological problem with the spreadsheet is the collapse of many elements into one sheet, or even down into one cell. In a spreadsheet, the "data table" IS the report. Entering new "transactions" means being able to edit existing rows. This makes the large, complex spreadsheet very fragile. Make a mistake entering a formula and everything breaks. It is like a tower of Jenga blocks waiting to fall.

Some of you may be typing a furious email to me right now saying, "You can solve these issues in a spreadsheet. Just design it like a database and use all of its features, like macros." It is true, some of these issues can be handled IF you design your spreadsheet correctly and have extensive IT and programming skills; essentially, if you build an application inside your spreadsheet. Thus the argument for continuing to use spreadsheets for complex tasks is not compelling, for 3 reasons:

Untrained operators - It is great to say "develop the spreadsheet properly", but consider the typical spreadsheet developer. They are often accountants/finance professionals. Because anyone can use spreadsheets & they are so easy, people with no particular development training build incredibly complex documents. Consequently, these documents rarely conform to the best practices that an experienced developer would insist upon.

"In general, errors seem to occur in a few percent of all [spreadsheet] cells... In programming, we have learned to follow strict development disciplines to eliminate most errors. Surveys of spreadsheet developers indicate that spreadsheet creation, in contrast, is informal, and few organizations have comprehensive policies for spreadsheet development." - Ray Panko, What We Know About Spreadsheet Errors

Too little control - Spreadsheet tools are simply not designed to enforce, or in some cases even support, best practices. One example is that many elements are combined/mixed together within each spreadsheet, sometimes even within one cell. Recall, spreadsheets are a generic tool designed to fit as many use-cases as possible. Thus, the user must: know they need a given feature (like validation), have the skills (which might include programming) to build it, and spend the time to build it.

Technical inability - Even if you have tremendous IT skills, some actions are very difficult, bordering on impossible, within a spreadsheet:

Managing large data sets - if you need to deal with millions of records, spreadsheets cannot help you.

Logging - It can be very difficult to determine which user added row 7 in our example. Or, if you are dealing with thousands of rows, which rows were added and when?

Documentation - if the person who created the spreadsheet leaves the organization, who else in the organization will know how to use what is essentially a custom application?
There is no easy way to direct subsequent users where to start, other than some notes that you hope are updated and then followed by those users.

Database weaknesses

All the power, validation and speed of a database-driven solution comes at a cost. Depending on the exact database selected, the costs can be significantly higher than a spreadsheet, and specialized hardware may be required. Further, installing, configuring and maintaining a database takes specialized knowledge. Finally, one of the benefits of the database-driven application is also one of its challenges: it constrains the user. In contrast to the Jenga metaphor for the spreadsheet, the database application can be thought of as a Rubik's Cube. The user can only do certain things in certain ways. Unlike the spreadsheet, there are rules about how the system can be used.

The Bottom Line

We have previously developed a detailed calculation to determine exactly when to use a spreadsheet, but if you want to keep it simple, here is the cheat sheet for choosing between the two tools:

Choose a spreadsheet (and follow best practices in how you build and use it) if the task is simple, only used by one or two people, and uses a small set of data that will only be used once.

Invest in a database-driven application if the task is complex, used by multiple people, standardized, recurring, and may house a moderate to large data set.
When & why should you upgrade from a spreadsheet to a database-driven application? The technological differences are significant & shouldn't be overlooked.
5 Best Practices for Updating Critical Software

  • Waldo Nell
  • 17 October, 2016
  • Tech for Execs
Finance officers, accountants and auditors are experts in a very specific field of knowledge. Generally speaking, that does not include great depth in IT systems and best practices. Yet finance officers, accountants and auditors rely on software (and hardware) to complete their work, often under very tight deadlines. This article lays out the case for finance involvement in the update process and provides 5 key steps for finance.

Running with Scissors

Almost certainly, all of your critical applications (ERP, budget, HR, financial reporting, payroll, etc.) get updated from time to time. For many of our clients, finance leaves this process entirely in the hands of IT. Often finance does not even know updates have occurred until they come in Monday morning and notice that something looks different. This is a very dangerous approach. Updates are risky - kind of like running with scissors. You may have done it a hundred times with no problems, but one little misstep and it can be a disaster. You may appreciate this if you have ever lived through a failed update. To give you more control and allow you to better mitigate this risk, we recommend our clients follow a 5 step process to manage their updates.

1) Work with IT - Make sure IT knows to include finance in all critical update processes. This does not mean sending an email to finance when the job is done. It means advising finance before anything happens (ideally when updates first become available) so you can work collaboratively to ensure everything goes well.

2) Know the benefits - Ensure you have a clear idea of the benefit the upgrade will bring to the organization. Amazing new features that you have been dying for? Major bug fixes that will prevent lots of errors? If not, the right answer may be to sit this upgrade out - unless the upgrade is required, for example, to improve network security.

3) Don't upgrade during critical periods - Now that IT is communicating with you when updates are pending, you can advise IT when a good time for finance to do the update is. For example, the middle of year-end is NOT a good time for just about anything, let alone an update to your CAFR / financial statement software. Some processes never stop (payroll, for example); for the systems behind those processes, look for the least-bad time periods to do upgrades.

4) Test first - Develop a testing plan to ensure that all features function acceptably in your environment, with your data, before you roll them out to the live system. This involves several steps:

Install in a test environment first, or take full backups of all data before installing the new version, and plan for some downtime if you need to roll back in the latter approach.

Get key system users to test all functions that they rely on to ensure they perform as expected. Note: if you used the backed-up live data option above, you will need to notify users that the system will be unavailable while testing is occurring.

If the new version passes all testing, take a backup of the live data and update the real / live server or environment. If you did not have a test environment and took backups of data before testing, you have no more work to do other than advising users they can use the application again.

5) Plan for failure - Develop a plan to deal with failed test results. If you followed steps 3 & 4 above, even with failed tests you are safe - no harm done.
If you took a backup and then installed over the previous version of the software, now is the time to restore from that backup. Next, communicate the problem to the vendor. If it takes the vendor a month to resolve your issue, day-to-day operations have not been affected, and while not optimal, it is survivable. When they give you a new update version, start at step 1 above and repeat.

Following this simple 5 step approach will limit your stress and help you avoid the vast majority of problems that haunt the dreams of finance officers everywhere.
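To make the backup-first discipline of steps 4 and 5 concrete, here is a minimal sketch in Python. The paths are invented, and it is no substitute for your organization's real backup tooling - treat it as an outline of the idea to discuss with IT:

```python
# Copy an application's data folder to a timestamped location before
# an update, so a failed upgrade can be rolled back (steps 4 and 5).
import shutil
from datetime import datetime
from pathlib import Path

data_dir = Path(r"C:\apps\financial-reporting\data")  # hypothetical path
backup_root = Path(r"D:\backups")                     # hypothetical path

stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
backup_dir = backup_root / f"pre-update-{stamp}"

shutil.copytree(data_dir, backup_dir)
print(f"Backed up {data_dir} to {backup_dir}")

# If post-update testing fails, restore and contact the vendor:
#   shutil.rmtree(data_dir)
#   shutil.copytree(backup_dir, data_dir)
```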
You rely on your key applications but updates and upgrades can be risky. Here are 5 steps to mitigate risk and guarantee better outcomes.
Tech for Execs: RDS & XenDesktop

  • Waldo Nell
  • 07 April, 2016
  • Tech for Execs
Microsoft RDS: Microsoft provides a feature built in to every Windows Server operating system called Remote Desktop Services (known as Terminal Services in Windows Server 2008 and earlier). RDS is Microsoft's technology to support thin client computing, where Windows software, and the entire desktop of the computer running RDS, are made accessible to a remote client machine that supports the Remote Desktop Protocol (RDP). With RDS, only the software's user interface is transferred to the client system. All input from the client system is transmitted to the server, where software execution takes place.

Citrix XenDesktop: A similar but standalone product called XenDesktop (replacing MetaFrame) is available from Citrix if you need more advanced functionality than Microsoft's RDS. Some version of this Citrix solution has been around for much longer than RDS; in fact, Microsoft RDS was initially built on Citrix technology, before Microsoft developed its own protocols. The core functionality of the two products is largely similar. The primary benefit of Citrix XenDesktop over Microsoft's RDS is better support for audio, video and graphics. Citrix XenDesktop is more powerful and is used by organizations requiring more fine-grained control over the virtualization experience.

For our recommended hardware specifications for your RDS or Citrix server, check out our other article.
A comparison of the two predominant thin client solutions: RDS and XenDesktop.
Thin Clients versus Fat Clients Explained

  • Waldo Nell
  • 07 April, 2016
  • Tech for Execs
Has it ever seemed like the IT folks spend their days making up arcane terms and acronyms? Every time you turn around there is some new three letter acronym (obligatory acronym: "TLA") or concept being discussed. It can be enough to cause the finance officer to let it all go in one ear and out the other. But here are two important concepts you should know something about: fat clients and thin clients. In this Tech for Execs post we are going to explain these terms so that you have a high-level outline of their meaning and the potential value they provide your organization.

Why bother with these particular terms? After all, they are hardly new technology, and you are likely not hearing about them as often as the latest buzzwords like "cloud computing". The reasons to care are three-fold:

It is very likely your IT department is using, or has recommended, "thin client computing" (for good reason!).

In many circumstances it can significantly reduce IT costs while increasing end-user performance.

Large CaseWare Working Papers files will almost certainly run MUCH faster in a thin client model.

The Scenario

You work for a government / university / corporation with many employees. It is very likely that most of your users have somewhat similar needs in terms of applications and files. All users most likely want access to Microsoft Word and Excel, your organization's line of business application(s), your accounting software if you are in the finance department, and so on. Further, you have files that need to be accessed by multiple individuals, perhaps simultaneously, perhaps from many different physical locations. How does your organization deal with these requirements? There are numerous variations and exceptions, with more innovations coming all the time. That being said, there are currently two predominant approaches to meeting these needs.

1) Fat Client (Traditional) Model - only data resides on servers

How it works: In the traditional approach, one simply provides each user with their own computer, installs Microsoft Word, Excel, Outlook, etc. on each one, and lets everyone work independently. You start your computer and arrive at your desktop. You then run your applications (Excel, Word, etc.), which use your computer's resources (processor, memory, storage, etc.). If you have no network connection, you can keep working with all the files saved on the local hard drive. Things get more interesting when your applications need to use data off a server. When data is accessed from the server, the following occurs:

The workstation makes a request to the server: "Please send me the data in this large Excel file that has been saved on the X drive."

The server pushes all the data (and if it is a big spreadsheet, it might be a LOT of data) across the network cable to your workstation.

You commence working on the spreadsheet: add rows, insert sub-totals, recalculate, etc. This work is done on your local machine, but all saving and requests for more data go across your network cable. Again, when dealing with a big file, this can be a very large quantity of data, and it can slow down the performance of the application.

Pros & Cons: On the upside:

Other users' activities on their computers have minimal effect on your performance.

Minimal reliance on complex, expensive servers in this model.
Typical use of servers in this environment is for file storage, checking & maintaining passwords, and perhaps hosting a database.

Users can be "self-sufficient" - as long as they have a laptop and the working data they need - when they go home on the weekend or travel to a conference.

The downside:

Every computer needs to be maintained. Each must be updated for security and bug fixes, as well as the updates of each and every software program.

Each computer's hardware must be maintained at levels acceptable for the software applications it will run. As the software programs require more resources, you must upgrade each and every computer that will use them.

If any data is stored on the local computer, it then needs to be backed up to protect the organization's data.

If a new application is needed, it likely has to be installed on multiple computers.

Because each workstation brings all the data across the network cable to be worked on locally, there is often a tremendous amount of network traffic. In a modern network this may not be an issue, but if there are very large quantities of data, or multiple physical locations that must communicate, the network bandwidth capacity may not allow quick transmission of all the data required.

Because of points 1 & 2 above, the organization is committed to continuous investment in each individual workstation to ensure the hardware is capable of running the current version of the software. This is where the second approach comes to the rescue.

2) Thin Client (Virtual) Model - data & applications on servers

How it works: In this model, instead of using multiple powerful workstations & laptops to run the applications, all that work is done by one or more servers. End users are given terminals instead, sometimes called "dumb terminals". Essentially, all they do is present the user with a picture of a computer interface on their monitor and accept the user's input (mouse clicks and keyboard typing), sending it along to the server for processing. All data, programs, and processing remain on the server. In many ways this model is a return to the mainframe model of the 1950s.

There are two primary technologies that drive this model, one provided by Microsoft ("Remote Desktop Services" or RDS) and another provided by Citrix (XenDesktop). If you want/need to participate more fully in a conversation with IT about these technologies, you can find the details here.

Pros & Cons: The benefits of thin client computing are many:

"Dumb terminals" need little to no maintenance. Only the servers need to be upgraded & maintained.

Only the server's hardware must be maintained at levels acceptable for the software applications and the number of users (more users = more horsepower required for servers).

No data can be stored on the local computer. Thus data is better controlled and backed up.

If a new application is needed, it must only be installed once - on the server.

Thin client software is long established and well known in the IT community.

Network traffic is made more predictable / even. Depending on the data being presented to the remote user, network traffic may even be reduced. Thus, even with minimal network bandwidth capacity, end users may not experience any slowness. Especially when remote access across the Internet / VPN is required, this may be the biggest benefit of all.
Because of points 1, 2, 3 & 4, the overall result is often a much lower total cost of ownership than the traditional model. But there are some concerns & drawbacks:

If users need to work with a lot of video & audio (especially creation & editing), the centralization of thin client computing can tax the servers tremendously, resulting in poor performance.

The servers (their software and setup) are more complex & expensive than the servers in the traditional model.

The downside of centralization is a single point of failure. Should your application server fail, all your users will be affected. To mitigate this risk, IT must build in redundancy, which offsets some (not all) of the savings.

Users might be tied to a terminal and unable to move around the office, or off the network, to perform their work. To be clear, your IT department can provide remote access to the RDS / Citrix server, but they will often be very concerned about the increased security risk of doing so.

Impact on CaseWare: Very large CaseWare Working Papers files used in a traditional model (data file on the server and application on your workstation) can be very slow, especially when network performance issues get in the way. There are some recommendations to maximize your CaseWare performance in this model, but you might benefit by changing models. Moving to a thin client model with CaseWare can yield major performance improvements. Note, though: due to the internal architecture of CaseWare Working Papers, the closer the Working Papers file is to the actual application, the better the performance. By close we mean not physical distance, but access time. The best performance will be obtained if the Working Papers file is stored on the same RDS/Citrix server that hosts the CaseWare Working Papers application.

If your organization is not currently using one of these thin client solutions, and your only concern is CaseWare performance, we also strongly suggest considering CaseWare SmartSync. To get more tech tips for executives, be sure to sign up for our blog In Black & White.
This Tech for Execs explains thin client computing and enables finance professionals to understand the benefits, both with CaseWare and generally.
Tech for Execs: Ignorance is not bliss

  • Waldo Nell
  • 08 March, 2016
  • Tech for Execs
F.H. Black & Company Inc. works with governments, universities, large companies and accounting firms. The professionals we work with are typically auditors, finance officers, accountants and CFOs. They have great expertise in accounting & management and tremendous experience in their industries. They may even spend some time refining their skills with the specific applications they use (Excel, CaseWare, etc.). Often, though, they have minimal expertise with the general computer technology infrastructure that surrounds them.

Why should accountants and finance officers want to become more sophisticated with technology?

Increase your professional performance: Improving the general technological literacy of finance professionals can yield a tremendous increase in their overall efficiency and effectiveness.

Improve your ability to assess & mitigate risk: Having at least a basic level of knowledge is essential to avoid fraud, data loss, identity theft and a myriad of other modern-day hazards. Not just personally: as a critical part of the internal control process, finance's facility with technology affects the entire organization's safety.

FHB's series of articles on Technology for Executives ("Tech for Execs") will enable you to do just that - improve your fluency and your ability to use technology to increase your efficiency, effectiveness & reliability. An important starting point for this series is to consider two questions:

Why is it that so many accountants are less than perfectly conversant with computer technology?

Why is the need for expertise in computer technology different from that of any other tool you rely on?

A very recent addition to the world

The explosion of the computer age over the last 40 years is unlike anything the human species has encountered before. No other technology has grown to have such an overwhelming influence on so many people in so short a time. Due to this very rapid adoption, there is a segment of the world's population that is relatively inexperienced with this new technology. That is understandable - most people aged 40 and over didn't grow up with computers, and if you are in your 50s, perhaps much of your early professional career passed without significant computer usage. When you combine the short timeline with the magnitude of the change, computer technology aptitude is not necessarily an easy skill to develop.

A tool unlike others

Regardless of age, most of us use the internet the way we use appliances or cars: to provide a service, without caring about the inner workings. Why would you want to understand the Otto cycle in your car? Or how the magnetron works in your microwave oven? Or how the escapement mechanism keeps your mechanical watch ticking? As long as it performs its function, we just don't care. Nor should we. You do not need to understand these things to drive your car or heat a cup of tea. If it breaks, you send it in for repair or you replace it. Not understanding the inner workings does not affect us negatively.

With computer technology there is a superficial similarity. You do not need to understand nomenclature such as CPU, RAM, HTTPS or CHAP to send a tweet, post a Facebook update or call a friend over your internet phone. But this is where the analogy stops. Most of the technology you use today relies on connectivity. Sending email, browsing the internet, accessing Facebook or LinkedIn means that your computer is connected to billions of other computers.
This connectivity, uniquely among the tools and technologies you use, exposes you to massive risk. The difference between securing a thing like your house and securing your computer or smartphone is that for someone to break into your home, they have to be physically standing in front of your house. Only a small number of people are geographically close to you, which severely limits how many bad guys could actually attempt to break in. With your internet-connected computer, you suddenly have more than a billion people who can attack you. And the threat gets even worse: most people do not have the knowledge or skill to understand that they are under attack, or even that their computer system has been entirely compromised.

Everyone understands how a burglar could break into their house: through a window, by picking the lock, over the roof, etc. But almost nobody (statistically speaking) knows how bad guys get into your computer. It is black magic, mysterious and spooky.

This does not mean everyone should be signing up for computer security courses, or that you even need to know what the acronyms mentioned earlier stand for. Just as you do not need to know what the Otto cycle is to recognize the danger of a road filled with cars and cross it safely, you do not need to be a security expert to protect yourself. We will be teaching you how to better prepare for this new era of interconnection. Naturally, we cannot possibly cover all aspects, as this is a very complex landscape requiring long, intensive study to truly master. We will merely apply the Pareto principle: teaching you the 20% of what you need to know that will make you 80% safer.
Tech for Execs discusses why accountants & finance professionals are likely not technology experts, and why they should increase their expertise.