Conventional floppy and hard disk drives record data in separate, concentric, non-overlapping magnetic tracks parallel to each other. The magnetic recording direction is longitudinal (horizontal) in older drives and perpendicular (vertical) in extended-density (ED) floppy disks and perpendicular magnetic recording (PMR) hard disk drives. In shingled recording, tracks are written overlapping one another, which moves them closer together and squeezes more tracks onto a given area of the magnetic surface. Ever since the floppy disk era, read heads have been narrower than write heads; single- and double-density floppy disk write heads, for example, leave a broader track on the magnetic surface than high-density write heads do. This has never changed, and even with today's technology write heads remain the wider element, which is where the idea of overlapping tracks originated. When the next track is written partially on top of the previous one, only part of the previous track remains "exposed", but that strip is still wide enough for the narrower read head to read.
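As a rough illustration of the resulting density gain, the track pitch in conventional recording is set by the write head width, while in shingled recording it shrinks to the exposed strip. The following Python sketch uses purely hypothetical head widths; real figures vary by drive generation:

    # Toy calculation of the track-density gain from shingling.
    # All widths are illustrative assumptions, not figures for any real drive.

    write_head_width_nm = 75.0   # full width of a freshly written track
    read_head_width_nm = 50.0    # the read element needs only a narrower strip
    exposed_track_nm = 55.0      # strip left readable after the next track overlaps

    # Conventional recording: track pitch equals the write head width.
    conventional_tracks_per_um = 1000.0 / write_head_width_nm

    # Shingled recording: track pitch shrinks to the exposed strip,
    # which must stay at least as wide as the read head.
    assert exposed_track_nm >= read_head_width_nm
    shingled_tracks_per_um = 1000.0 / exposed_track_nm

    gain = shingled_tracks_per_um / conventional_tracks_per_um
    print(f"Track-density gain from shingling: {gain:.2f}x")  # ~1.36x here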
The main problem with this approach to packing data onto the magnetic surface is a complicated rewriting process. If any part of the data stream laid down on the disk surface needs to be changed, even a single bit, all data written after that point must be rewritten as well, because the shingled structure cannot be "untangled" to make the change in place. An SMR track region thus effectively becomes a linear medium, as sketched below. That is why SMR drives are divided into a number of append-only (sequential) zones of overlapping tracks that must be rewritten entirely when full. These bands of shingled data are usually complemented by some form of cache for more randomly accessible data, for metadata, or as a data buffer. These additional storage areas can be conventionally written PMR tracks on the drive's platters, DRAM, flash memory, or a combination of these.
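This append-only behavior can be modelled in a few lines of Python. The zone abstraction below is a deliberate simplification of how host-visible zones behave; zone size and block names are hypothetical:

    # Minimal sketch of why an SMR zone behaves like a linear (append-only)
    # medium. Zone size and block contents are hypothetical.

    class SmrZone:
        """A zone of overlapping tracks: writes only advance the write pointer."""

        def __init__(self, num_blocks):
            self.blocks = [None] * num_blocks
            self.write_pointer = 0  # index of the next writable block

        def append(self, data):
            self.blocks[self.write_pointer] = data
            self.write_pointer += 1

        def update(self, index, data):
            # Overwriting in place would clobber the overlapping tracks after
            # `index`, so everything from `index` onward is read back and
            # rewritten -- this is the write amplification of shingling.
            tail = self.blocks[index + 1:self.write_pointer]
            self.write_pointer = index
            self.append(data)
            for block in tail:
                self.append(block)

    zone = SmrZone(num_blocks=8)
    for i in range(5):
        zone.append(f"block-{i}")
    zone.update(1, "block-1-v2")  # rewrites blocks 2..4 as a side effect
    print(zone.blocks[:zone.write_pointer])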
Device-managed SMR drives hide this complexity by managing writes to the physical surface in an optimized manner via their firmware, presenting the host system with an interface that resembles any other "traditional" hard disk drive. They do this by buffering data and metadata in large caches, many gigabytes in size. This works well for append-only, archival workloads in which files are written once and then rest on storage for extended periods afterwards. It becomes a problem when a drive is exposed to more random workloads with frequent updates, typically smaller objects, and little or no idle time. In this case the cache quickly fills up, and the drive's shingled recording areas cannot keep up with the new data being ingested. The drive must then write new data while simultaneously reorganizing old data scheduled for rewriting, and it experiences a sharp drop in performance, from megabytes per second down to perhaps only bytes per second of data actually written to disk.
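A back-of-the-envelope model shows how this performance cliff arises once the cache fills. All capacities and rates below are invented for illustration; real values depend on the drive and workload:

    # Toy model of a drive-managed SMR cache under sustained random writes.
    # All sizes and rates are assumptions, not measurements of any real drive.

    cache_capacity_gb = 20.0   # persistent (e.g., PMR) write cache
    ingest_gb_per_s = 0.150    # incoming random writes: 150 MB/s
    destage_gb_per_s = 0.020   # rate at which the firmware can rewrite
                               # shingled zones from the cache: 20 MB/s

    cache_fill_gb = 0.0
    for second in range(600):
        cache_fill_gb += ingest_gb_per_s - destage_gb_per_s
        if cache_fill_gb >= cache_capacity_gb:
            # Cache is full: the host now sees only the destage rate.
            print(f"Cache full after {second} s; "
                  f"throughput drops from {ingest_gb_per_s * 1000:.0f} MB/s "
                  f"to ~{destage_gb_per_s * 1000:.0f} MB/s")
            break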
Some types of SMR drive therefore expose this internal structure to the host system. Host-managed SMR drives depend on the operating system knowing how to handle the drive. Software can then use shingled data layouts for longer-term storage while delegating smaller or frequently updated objects to other storage subsystems, or scheduling them for batched writes. Some systems allow such rules to be formulated as "storage policies".
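On Linux, host-aware and host-managed drives are exposed through the kernel's zoned block device support, and the zone model of a device can be queried via the queue/zoned sysfs attribute. A minimal sketch (the device name "sda" is a placeholder):

    # Query how Linux classifies a block device's zone model via sysfs.
    # The queue/zoned attribute is part of mainline zoned block device
    # support; the device name passed in below is an assumption.

    from pathlib import Path

    def zone_model(device: str) -> str:
        """Return 'none', 'host-aware' or 'host-managed' for a block device."""
        path = Path(f"/sys/block/{device}/queue/zoned")
        try:
            return path.read_text().strip()
        except FileNotFoundError:
            return "unknown (kernel without zoned block device support)"

    print(zone_model("sda"))  # drive-managed SMR drives report 'none' here,
                              # since they hide their zones from the host

Note that this check cannot identify drive-managed SMR drives, which present themselves as conventional disks.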
In recent years, end users have sometimes been unaware of the recording technology actually used in a drive and have employed so-called "archive" drives, which usually use SMR, for heavy random workloads, only to be surprised by the drives' low performance. It is therefore essential to know which technology an HDD uses. SMR drives can be used in RAID systems, but the controllers and the SMR drive type (drive-managed or host-managed) need to be chosen appropriately.