In this article, we’ll take a look at what NTFS is and what its main features are. We’ll also see how it has evolved over the years to better understand why this file system has lasted for so long and continues to be Microsoft’s chosen file system.
The NTFS file system has been used by Microsoft for its server operating systems ever since it first appeared on Windows NT 3.1 in July 1993, and it is also used by the latest desktop systems.
NTFS is used as the main file system on the following operating systems:
- Windows Server 2022
- Windows Server 2019
- Windows Server 2016
- Windows 11
- Windows 10
However, given that this file system has been used ever since NT 3.1, it can also be found on older discontinued systems, such as Windows Server 2012, 2008 and their respective R2 versions.
But why is it that a file system that first appeared in 1993 is still so widely used today? Well, the answer is a little complex, but we are going to attempt to explain it as simply as we can.
The first thing we need to point out is that the original version of NTFS has very little in common with the version currently in use, as it has evolved drastically over the years.
Secondly, it’s important to stress that this file system includes a number of features that make it highly secure and efficient, and it is precisely these features that are the secret to its longevity.
The Main Features of NTFS
To understand exactly why NTFS is so popular, we are going to take a look at some of its most important features, as briefly as we can without leaving anything out.
Here are the main advantages of NTFS:
- Journaling file system
- Self-healing
- Greater reliability
- Greater data security
- Disk quota system
- Long filename support
- Compatibility with large volumes
- File indexing
NTFS is a Journaling File System
One of the main advantages of NTFS is that it is a journaling file system. This means that it includes a number of measures aimed at preserving data integrity and resolving any potential errors that might occur during data transactions.
As a journaling file system, NTFS works according to ACID principles (Atomicity, Consistency, Isolation, Durability).
The ACID model allows a system to group file and registry operations into a single transaction and then either commit them or roll them back. The system records the changes to be made before they are actually performed, and the changes are not visible until the transaction is complete. This way, if an error occurs, the data can be reverted to its original state.
These operations can also be used in conjunction with other systems, such as MSMQ (Message Queue Server), Microsoft SQL Server, or even command-line scripts.
NOTE: The ACID concept may sound familiar to some readers, particularly if you have worked with databases. The term was coined in the 1980s by Andreas Reuter and Theo Härder, referring to the properties required for reliable data transactions.
The aim of ACID is to ensure that file transactions are completed without interruptions and free from errors that might corrupt the data.
NTFS volumes are managed autonomously, and transactions that span volumes are coordinated by the Kernel Transaction Manager (KTM).
The KTM provides NTFS with a log for each volume, which can then be used to recover data or cancel transactions if required.
Put together, all of this is usually known as “journaling”.
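To make the commit/rollback idea concrete, here is a minimal Python sketch of write-ahead journaling. Everything here (the `Journal` class and its method names) is illustrative only; it mirrors the principle described above, not NTFS’s actual implementation.

```python
# Minimal sketch of write-ahead journaling: changes are logged before they
# are applied, and only become visible once the transaction commits.

class Journal:
    def __init__(self):
        self.data = {}        # the "committed" state of the volume
        self.pending = None   # changes recorded but not yet visible

    def begin(self):
        self.pending = {}     # start a new transaction

    def write(self, key, value):
        # The change is logged first; readers still see the old state.
        self.pending[key] = value

    def commit(self):
        # Only now do the logged changes become visible (atomicity).
        self.data.update(self.pending)
        self.pending = None

    def rollback(self):
        # On error, discard the log; the committed state is untouched.
        self.pending = None

j = Journal()
j.begin()
j.write("file.txt", "new contents")
j.rollback()                      # simulate a failure mid-transaction
print(j.data)                     # {} -- nothing leaked

j.begin()
j.write("file.txt", "new contents")
j.commit()
print(j.data)                     # {'file.txt': 'new contents'}
```

Notice that a crash before `commit` leaves the committed state exactly as it was, which is the guarantee journaling is built to provide.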
NTFS is Self-healing
Current versions of Windows Server feature a version of NTFS that can repair itself automatically. This makes it possible to offload the repair processes onto the system. Before, these processes used the Check Disk tool (chkdsk) to perform maintenance tasks, and this used to seriously affect performance and reduce the availability of the service.
This new model is the result of a substantial improvement to the kernel and the way it interacts with the file system. It allows file system integrity to be maintained in a much more efficient and, above all, reliable way, as it’s the system itself that performs the checks and corrects any errors by default.
The aim of this automatic self-checking is to minimise the catastrophic situations that can arise if an error affects the file system or even prevents it from mounting.
This new model also enables greater traceability of disk state through the reports generated as the system checks itself.
Furthermore, administrators and technicians can use these reports to perform verification, monitoring or even audit processes that are much richer and add greater value to the data extracted.
Despite all this, however, there can be occasions when an offline manual recovery is required, but these occasions are much rarer because of the way the system maintains itself proactively.
For example, an offline operation may be required if damage occurs that leaves the boot sector readable but prevents the system from identifying the file system as NTFS.
Later on, we’ll touch on maintenance options for NTFS, as well as chkdsk and other tools.
Greater Reliability
This point is heavily linked to the point above. With traditional file systems, if the system suffered an unexpected reboot or shutdown, it would often restart in an unstable state. This was largely due to the fact that the file system hadn’t finished all pending transactions and was unable to recover all data.
NTFS uses journaling to generate a record and a restore point to return to in the event of something like this. If it can, it will finish the pending transaction. If it can’t, it will return to the most recent restore point it can find.
As well as unexpected shutdowns, this situation can also be caused by cluster assignment errors or any other mechanical failure on the drive where the file system is hosted.
If an error is caused by a mechanical failure, the system will try to resolve it automatically through dynamic reassignment, without requiring any intervention on the part of the administrator.
In the most extreme cases, NTFS can try to recover files by replaying the chain of transactions during which the error occurred. However, whilst the success rate of this procedure is fairly high, it doesn’t always guarantee a successful recovery.
Greater Data Security
Another of NTFS’s strengths is its increased data security as it allows the use of different methods of access control, including:
- Access control lists
- Drive encryption with BitLocker
Access Control Lists (ACL) make it possible to set permissions for files and folders, specifying which users or groups can access data and to what extent.
With ACL, you can explicitly allow or restrict access with a high level of granularity. For example, you can allow users to see the contents of a folder but not open any files. Or you can allow users to open files but not edit them.
Usually, when using access control lists, everything is restricted at first, and permissions are gradually enabled for those users that need access to certain resources.
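The “restrict everything first, then grant” approach can be sketched with a toy access-control check in Python. This is purely illustrative: real NTFS ACLs work with security identifiers (SIDs), inheritance and far more permission types than the three shown here.

```python
# Toy default-deny ACL: access is refused unless an entry explicitly
# grants that permission to one of the caller's groups.

acl = [
    ("everyone", "list_folder"),   # anyone may see the folder's contents
    ("editors", "read_file"),      # only editors may open files
    ("editors", "write_file"),     # ...and edit them
]

def is_allowed(groups, permission, acl):
    # Grant only if some ACL entry matches a group AND the permission.
    return any(p in groups and perm == permission for p, perm in acl)

# A plain user can list the folder but not open files, as in the
# example described above:
print(is_allowed({"everyone"}, "list_folder", acl))             # True
print(is_allowed({"everyone"}, "read_file", acl))               # False
print(is_allowed({"everyone", "editors"}, "read_file", acl))    # True
```

The default-deny shape is what gives ACLs their granularity: you never have to enumerate what a user *cannot* do, only what they can.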
On the other hand, drive encryption with BitLocker provides an additional layer of security in order to isolate the most critical data in encrypted NTFS volumes, thereby preventing anyone from accessing data if a disk is physically removed and connected to another system.
Although this is more of an informative article than a technical one, we should also mention that BitLocker allows encryption of devices on both x86 and x64 computers through TPM (Trusted Platform Module).
However, the best thing about BitLocker is its ease of use. All you need is a password or an authorisation method to access data.
Disk Quota System
The disk quota system on NTFS volumes is another great advantage compared with other file systems. With this really simple tool, you can configure, maintain and assign quotas or portions of a disk to each user.
These quotas allow you to limit the amount of space that each user or group can use, preventing a user from taking up excessive space on a disk and blocking all write operations once they use up their quota.
Quotas can be applied to volumes, folders or subfolders.
This quota system also allows you to perform the following tasks:
- Create quotas to limit the assigned space to a volume or folder.
- Generate notifications when a soft limit is reached so that a warning is issued.
- Generate notifications when a hard limit is reached, beyond which write functions are no longer possible.
- Automatically apply quotas to all folders in a specific environment.
- Create easily applied quota templates.
- Set limits according to users or groups.
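The soft-limit/hard-limit distinction in the list above can be sketched in a few lines of Python. The `Quota` class and its sizes are illustrative only; this mirrors the behaviour described, not the Windows quota implementation.

```python
# Toy quota tracker: a soft limit triggers a warning, a hard limit
# refuses further writes. Sizes here are arbitrary units.

class Quota:
    def __init__(self, soft_limit, hard_limit):
        self.soft = soft_limit
        self.hard = hard_limit
        self.used = 0

    def write(self, size):
        if self.used + size > self.hard:
            # Hard limit: the write is blocked entirely.
            raise OSError("hard quota reached: write refused")
        self.used += size
        if self.used > self.soft:
            # Soft limit: the write succeeds, but a warning is issued.
            print("warning: soft quota exceeded")

q = Quota(soft_limit=80, hard_limit=100)
q.write(70)        # fine, no warning
q.write(20)        # succeeds, prints the soft-limit warning (used = 90)
try:
    q.write(20)    # would exceed the hard limit of 100: refused
except OSError as e:
    print(e)
```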
Long Filenames and File Paths
The NTFS filesystem can also handle much longer filenames and file paths than previous file systems.
In fact, NTFS allows paths up to 32,767 characters long.
This limit is way beyond the limit of 260 characters allowed by “MAX_PATH”.
Furthermore, this extension also applies to NTFS-based storage models like Cluster Shared Volumes (CSV), where various nodes access the same data concurrently, ensuring no loss of service should one of the nodes fail.
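In practice, paths beyond the classic MAX_PATH limit are passed to the Win32 API with the `\\?\` extended-length prefix (on Windows 10 and later, the `LongPathsEnabled` setting can also lift the limit). Here is a small Python sketch; the helper name `extended` is our own, not a Windows API.

```python
# Sketch of the Win32 extended-length path convention: paths at or over
# MAX_PATH need the "\\?\" prefix for the legacy API limit to be skipped.

MAX_PATH = 260

def extended(path):
    """Add the \\?\ prefix if the path is long and not already prefixed."""
    if len(path) >= MAX_PATH and not path.startswith("\\\\?\\"):
        return "\\\\?\\" + path
    return path

# Build a deliberately long (but legal for NTFS) path:
long_path = "C:\\data\\" + "\\".join(["folder"] * 50) + "\\report.txt"
print(len(long_path) > MAX_PATH)    # True
print(extended(long_path)[:4])      # \\?\
```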
Compatibility with Large Volumes
NTFS has evolved a lot over the years but so has storage capacity. Today, NTFS can handle volumes of up to 8 Petabytes on Windows Server 2022 and up to 256 TB on Windows 11.
The way that the disk is formatted when it is first configured has a direct impact on the size it can support. So, you need to choose an appropriate cluster size for larger disks, although you can always leave the size to be selected automatically.
Below, we’ve included a table so that you can see the sizes you need to choose for working with large NTFS volumes:
| Cluster Size | Largest Volume or File |
| --- | --- |
| 4 KB (default size) | 16 TB |
| 8 KB | 32 TB |
| 16 KB | 64 TB |
| 32 KB | 128 TB |
| 64 KB (previous max.) | 256 TB |
| 128 KB | 512 TB |
| 256 KB | 1 PB |
| 512 KB | 2 PB |
| 1024 KB | 4 PB |
| 2048 KB (max. size) | 8 PB |
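The pattern in the table is no accident: an NTFS volume can address at most 2^32 clusters (strictly 2^32 − 1; we round up here for clarity), so the maximum volume size is simply the cluster size multiplied by 2^32. A quick Python check reproduces the whole table:

```python
# Max NTFS volume size = cluster size x number of addressable clusters.
CLUSTERS = 2 ** 32   # clusters addressable per volume (rounded from 2**32 - 1)

for kb in [4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]:
    max_bytes = kb * 1024 * CLUSTERS
    print(f"{kb:>4} KB cluster -> {max_bytes // 2**40} TB")
# e.g. the first line is "   4 KB cluster -> 16 TB", matching the table
```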
File Indexing
NTFS also includes a file indexing service. This means that the file system makes a note of all the data stored and creates a detailed index of everything it finds, whether data or metadata.
This means that searches will be much quicker and it will be easier to find the data you’re looking for.
The main problem with this process is that, when run for the first time, it can take hours, depending on the size of the disk. It can also severely affect disk performance. That’s why we recommend running this service at times when the system won’t be in use or will have a reduced workload.
The index maintains itself automatically. Every now and then, the process restarts in the background, looking for any changes to the file system. This is much less resource intensive and doesn’t affect performance.
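The trade-off described above (one slow indexing pass, then fast searches) is the classic inverted-index pattern. Here is a toy Python version; the file names and contents are invented for illustration, and real indexing services track far more metadata than words.

```python
# Toy inverted index: one slow pass maps every word to the files that
# contain it, after which each search is a single dictionary lookup
# instead of a full scan of every file.
from collections import defaultdict

files = {
    "report.txt": "quarterly sales report",
    "notes.txt": "meeting notes on sales targets",
    "todo.txt": "update the report",
}

# The expensive one-time pass (analogous to the initial indexing run):
index = defaultdict(set)
for name, contents in files.items():
    for word in contents.split():
        index[word].add(name)

# Subsequent searches are cheap:
print(sorted(index["sales"]))    # ['notes.txt', 'report.txt']
print(sorted(index["report"]))   # ['report.txt', 'todo.txt']
```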
What Does NTFS Do?
File systems can handle a number of different operational and maintenance tasks and NTFS is no exception.
Next, we’ll take a brief look at the following basic operations:
- NTFS maintenance
- NTFS permission management
- Data protection using BitLocker.
NTFS Maintenance
Almost all NTFS maintenance options are automatic, but sometimes you may want to perform them manually, whether it’s because you want to audit the system or because you know you’re going to have problems.
You can manually check the state of the file system using either the chkdsk command or the graphical interface.
To use the chkdsk command, use the following syntax:
```
chkdsk <destination> <parameters>
```
Where:
- <destination>: refers to the disk that you want to check.
- <parameters>: refers to the parameters or modifiers that you want to apply. These may include:
- /f to correct disk errors.
- /r to locate bad sectors and recover readable data.
- /x to force the volume to dismount first.
Here is a link to Microsoft Learn if you want to find out more about chkdsk and see all the parameters that you can use.
On the other hand, if you want to use the graphical interface, you can use the Error Checking tool.
To do this, simply open Windows Explorer and right-click on the volume you want to check. Then, click on “Properties”.
In the window that appears, click on the “Tools” tab, and then click on “Check” under the “Error checking” section.
Permission Management
Managing access and permissions on NTFS is probably one of the most complicated tasks to perform on a Microsoft system. This isn’t because it’s difficult to understand or do. It’s because you need to take extra special care to avoid any mistakes. If you assign the wrong access, this can be difficult to fix later.
If you don’t take care when configuring permissions, you could allow people to access locations that they shouldn’t have access to.
That’s why we think that this subject deserves its own article where we can talk about permissions and inheritance in detail. Watch this space for a new article soon.
Data Protection with BitLocker
Protecting drives with BitLocker is relatively simple. By encrypting your disk volumes, you can protect your data so that it can only be accessed by authorised users using a password, token, biometric marker, etc.
This feature isn’t available on all versions of Windows. For example, it’s not supported by Windows 10 Home.
The activation process is very simple. All you need to do is open File Explorer, right-click on the disk in question and click on “Turn on BitLocker”. Then, you’ll be asked to choose an unlock method and a password, and once configured, BitLocker will be active.
NOTE: Volumes that already contain data should be backed up first in case there are any problems when activating BitLocker.
Conclusion
NTFS is a widely used file system on Microsoft servers and is also used on many desktop operating systems.
Over the last 30 years, this file system has continuously evolved and its really useful features have made it into a stable, secure and efficient system.
It is highly reliable and the majority of features manage themselves, so the requirement for manual intervention is minimal.
We hope that you’ve found this article useful, and that you now understand a little better what NTFS is and what it does.
Thanks for reading!