This design principle demands that all potential negative aspects of a system be made explicit. If they are not, users will be unpleasantly surprised sooner or later.
Examples:
- A security mechanism that requires complex passwords is considered secure, ignoring the fact that most people write such passwords down near the machine where they are prompted for them.
- A site claims to provide impartial news, but all of its items are biased.
- Logins used to differentiate users, forgetting that LoginsAreEvil.
Contrast with SecurityByObscurity, DeceptivePractice
The above description is part of it, but it isn't fundamental enough to capture what AvoidIllusion means. SoftSecurity is built on trust; AvoidIllusion is about not inviting false, undeserved trust. Quoting from SoftSecurity: "A weak security system can be worse than no security at all, because it may lull users into unwarranted trust. UserNames without passwords may be safer than with, because everyone will know they are forgeable."
Maybe it can be summarized as, "Don't build paper fences." If you make it seem that you have impregnable security, people will behave more loosely because they trust the security to catch any problems. But if it isn't impregnable, then when that security is breached, people will be unprepared to deal with the situation. They might not even be aware that there is a situation to be dealt with.
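To make the paper-fence point concrete, here is a minimal sketch of the difference between presenting a contributor name as if it were verified and labeling it explicitly as forgeable. The names and structure are hypothetical, not taken from any particular wiki engine:

    from dataclasses import dataclass

    @dataclass
    class Contributor:
        name: str
        verified: bool  # True only if the identity was actually authenticated

    def attribution(contributor: Contributor) -> str:
        """Build the attribution line shown next to an edit."""
        if contributor.verified:
            return contributor.name
        # AvoidIllusion: surface the weakness instead of implying trust.
        # An unauthenticated name is forgeable, and readers should see that.
        return f"{contributor.name} (unverified; anyone can claim this name)"

    print(attribution(Contributor("ExampleUser", verified=False)))
    # ExampleUser (unverified; anyone can claim this name)

The point of the explicit "(unverified)" label is that readers calibrate their trust correctly, instead of assuming the name proves anything.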
Another problem with apparently "impregnable" security is that it seems to attract attackers and vandals looking for a challenge (which is likely, given that the best defense against them is to wait until they get bored and leave ... which puts them in a frame of mind to look for another challenge). Maybe you really do have impregnable security, but can you afford to have your strength drained daily as attackers and vandals probe and test it?
Consider also: TrustButVerify and TolerateAmbiguity