It sounds like your client does not have an active SQL Maintenance Plan that runs the Check Database Integrity Task periodically - which is designed to catch exactly this kind of issue, whether or not the particular SQL Server is even being backed up. Is that the case?
Now, if we are discussing a situation where the client shifts the responsibility for monitoring database consistency to a backup application, then my answer will be long, because the issue is much more complex than the way you put it.
But first, to answer your question - yes, this particular issue your client faced can happen with any block-level incremental backup in general (not just VM snapshot-based backups). Even with a file-based backup, a file system level issue will only surface if the backup process copies changed files in their entirety, i.e. by reading the whole file - an approach that is usually impossible today from a backup window perspective (at least with the amounts of data most people need to back up these days). However, as soon as you start reading only changed blocks - which is what even "legacy style" file-based backup tools do these days to fit the backup window - the backup process will no longer implicitly "validate" the backed up files for consistency either.
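To make the mechanics concrete, here is a deliberately simplified Python sketch (the change map, block size and in-memory "disk" are all made up for illustration, not how any particular product implements changed-block tracking). The backup consults the change map and reads only dirty blocks, so silent corruption in an unchanged block is never even read:

```python
# Hypothetical model of changed-block tracking (CBT): the hypervisor
# or file system keeps a "change map" of blocks written since the
# last backup, and the backup job reads only those blocks.
BLOCK_SIZE = 4

disk = {0: b"AAAA", 1: b"BBBB", 2: b"CCCC", 3: b"DDDD"}
dirty = {1, 3}  # only these blocks were written since the last backup

def incremental_backup(read_block, dirty_blocks):
    """Read and copy ONLY the blocks marked dirty in the change map.
    A block that silently rotted on disk but was never written to is
    absent from the map, so the backup never reads it and therefore
    cannot notice the damage."""
    return {i: read_block(i) for i in sorted(dirty_blocks)}

# Simulate bit rot in an UNCHANGED block: block 2 is now garbage,
# but since nothing wrote to it, it is not in the change map...
disk[2] = b"C?CC"

backup = incremental_backup(lambda i: disk[i], dirty)
# ...so the corrupted block 2 is neither copied nor even read:
print(sorted(backup))  # [1, 3]
```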
However, all of this does not even matter much, because the whole issue is much bigger once you keep in mind that in addition to file system level corruption (physical integrity), there are also application data corruption issues (logical integrity). In fact, the latter is the most common corruption type for our backup files, typically caused by "bit rot". In the case of application data corruption, even a legacy style file-based backup that DOES read entire changed files will not detect the corruption - because from the file system perspective the file is perfectly fine, while its content can be complete rubbish from the application perspective.
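Here is a tiny Python illustration of that difference (the record format with a trailing CRC32 is a made-up "application format", purely for demonstration). After a single flipped byte, the file still reads back without any file system error - physical integrity is intact - yet the application-level check immediately sees the payload is bad:

```python
import zlib, tempfile, os

# Toy "application format": payload followed by its CRC32 checksum.
def write_record(path, payload: bytes):
    crc = zlib.crc32(payload).to_bytes(4, "big")
    with open(path, "wb") as f:
        f.write(payload + crc)

def logically_valid(path) -> bool:
    """Application-level check: recompute the CRC over the payload."""
    data = open(path, "rb").read()          # this read succeeds: the
    payload, stored = data[:-4], data[-4:]  # file is physically intact
    return zlib.crc32(payload).to_bytes(4, "big") == stored

path = os.path.join(tempfile.mkdtemp(), "record.bin")
write_record(path, b"important application data")

# Simulate bit rot: flip one byte in place. The file system is
# perfectly happy with the file; only the application can tell.
data = bytearray(open(path, "rb").read())
data[5] ^= 0xFF
open(path, "wb").write(bytes(data))

print(logically_valid(path))  # False: readable, but rubbish payload
```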
In fact, the above issue is EXACTLY why we recommend so strongly against using storage-based replication to take Veeam backups off-site - this process also simply copies the entire backup files (thus implicitly validating their physical integrity), but it does not validate their content for consistency from the application perspective (logical integrity) - so you can potentially end up with an off-site backup that is just as unrecoverable as the primary backup due to bad payload.
So, there is only one real solution that addresses all corruption types - running an application-specific data test that reads the entire data pool used by the application and validates its logical integrity using application-specific methods. Such a test cannot complete successfully without physical integrity, so you don't even have to worry about the latter separately. Going back to my Veeam backup files example, this logical integrity test is done inline by the Backup Copy job (as it reads the content of copied restore points), and also by the storage-level corruption guard (for data at rest).
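If it helps, here is a rough Python sketch of what such a verification pass conceptually does (the file layout, manifest and digest scheme are assumptions for illustration - this is NOT how our corruption guard is actually implemented). Reading every file end to end exercises physical integrity; comparing each digest against the application's own manifest checks logical integrity:

```python
import hashlib, pathlib, tempfile

def verify_pool(pool_dir, manifest):
    """Sketch of a full verification scan: read EVERY file in the
    data pool and compare it with the expected digest recorded by
    the application itself in a manifest."""
    bad = []
    for name, expected in manifest.items():
        path = pathlib.Path(pool_dir) / name
        try:
            actual = hashlib.sha256(path.read_bytes()).hexdigest()
        except OSError:            # cannot be read at all: physical corruption
            bad.append(name)
            continue
        if actual != expected:     # readable, but wrong payload: logical corruption
            bad.append(name)
    return sorted(bad)

# Build a tiny pool with a manifest, then silently damage one file.
pool = tempfile.mkdtemp()
manifest = {}
for name, payload in {"a.bin": b"good data", "b.bin": b"more data"}.items():
    (pathlib.Path(pool) / name).write_bytes(payload)
    manifest[name] = hashlib.sha256(payload).hexdigest()
(pathlib.Path(pool) / "b.bin").write_bytes(b"more d4ta")  # bit rot

print(verify_pool(pool, manifest))  # ['b.bin']
```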
Now, what about production applications being backed up, like SQL Server? Well, again - if you decide to move the responsibility for application data consistency monitoring to a backup application (which may not be a good idea to start with), then you can only perform such a test either during the backup itself (which is impossible these days from a backup window perspective), or after the backup has been completed. And Veeam actually makes the latter possible with SureBackup, which can automatically spin up any VM directly from a backup and easily run any test against the application. For SQL Server specifically, it can be a test script running a DBCC CHECKDB query against the database in a SureBackup VM.
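For illustration only - a minimal Python sketch of what such a test script could look like. The server and database names are placeholders, and using the sqlcmd utility with -b is just one possible way to surface DBCC CHECKDB failures through the script's exit code, which is what the test framework ultimately checks:

```python
import subprocess

def build_checkdb_command(server: str, database: str) -> list[str]:
    """Assemble a sqlcmd invocation running DBCC CHECKDB.
    The -b flag makes sqlcmd exit non-zero if the query raises an
    error, so the caller can treat the exit code as pass/fail."""
    query = f"DBCC CHECKDB ([{database}]) WITH NO_INFOMSGS"
    return ["sqlcmd", "-S", server, "-b", "-Q", query]

def run_checkdb(server: str, database: str) -> int:
    """Run the consistency check; 0 means the database passed."""
    return subprocess.run(build_checkdb_command(server, database)).returncode

# Placeholder names for the VM running inside the isolated virtual lab:
print(build_checkdb_command("sql-vm-in-lab", "ProductionDB"))
```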
Does this answer your question?