Because collaborations are social works, social problems, like dealing with malicious persons or even just with errors and inconsistencies, must ultimately be dealt with socially. Technology certainly can help (e.g. AuditTrail), but it isn't a panacea.
The best way to ensure that the information is correct is through PeerReview à la academic journals. This ensures that actual peers, people with the appropriate expertise, do the reviewing. There are different forms of academic peer review, however: single blind (authors don't know who the reviewers are), double blind (reviewers and authors don't know each other), and occasionally triple blind (editors, reviewers, and authors don't know each other). Since it is hard to be truly anonymous in a small academic niche (think writing style), this doesn't always work: power structures remain in place. Transparent academic peer review is practiced by some OpenAccess sites such as http://ArXiv.org
A page that is universally editable (as on wikis), while universally destroyable, is also universally fixable. Hence, wikis practice extreme peer review. And since wikis have a public AuditTrail (PageHistory, RecentChanges), it is extreme, transparent peer review.
Other review systems involve annotations, like DiiGo or FireTrail (CritDotOrg, currently defunct), or threaded discussion, like SlashDot. However, these suffer from forcing readers to read both good and bad information before they can reach a synthesis.
Ultimately, a collaboration works best because peers make up for each other's weaknesses and mistakes. It isn't even necessary that all peers do review work; the TeethToTailRatio merely has to be maintained. As a whole, even with only a few reviewers, the group is very strong. This is exactly why the Wiki:ScientificMethod is so successful.
Strong yes. But in a good sense? Again, successful in what sense? --RichardDrake
Successful in the sense that Science is remarkably free of totally bogus ideas. Some may linger, but they will likely be eradicated in time. Strong in the CollectiveIntelligence sense.
See also Wiki:PeerReview and ReversibleChange for some (reversible) means of PeerReview. For a case study in how you can end up with an infinite recursion of PeerReview if you're not careful, see MetaModeration.
PeerReview consists of two parts, the PeerPart and the ReviewPart.
See also AuditTrail, EnforceResponsibility, AcademicPeerReview, and the GuildModel of peer review.
I believe this focuses almost exclusively on the ReviewPart, which is the actual content of the review. The other significant part of peer review is in selecting (or assessing) the reviewers, or the PeerPart. This is in my opinion given short shrift here, and is a large part of what I've been trying to address with systems such as the ScoopEngine. -- KarstenSelf 8 April 2001
One technical option for checking that the community is being diligent with PeerReview is to ensure that each edited page is matched with at least one non-author view. This could be done through the access_log alone, as sketched below. However, since the access_log only tracks IPs, it is easy to dupe (edit at work, view at home). That would create an illusion of review, violating AvoidIllusion and perhaps encouraging people to be lax in their review of others.
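A minimal sketch of that check, in Python. It assumes an Apache-style combined access_log, a hypothetical tab-separated edit log ("page<TAB>author_ip"), and a "/wiki/PageName" URL scheme; none of these names come from any particular wiki engine.

 # Flag edited pages whose only recorded views came from the editing IP.
 # File names, log formats, and the URL pattern are illustrative assumptions.
 import re
 from collections import defaultdict

 ACCESS_LOG = "access_log"    # assumed Apache combined-format log
 EDIT_LOG = "edit_log.tsv"    # assumed: one "page\tauthor_ip" line per edit

 view_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "GET /wiki/(\w+)')

 def views_by_page(path):
     """Map each page name to the set of IPs that viewed it."""
     views = defaultdict(set)
     with open(path) as f:
         for line in f:
             m = view_re.match(line)
             if m:
                 ip, page = m.group(1), m.group(2)
                 views[page].add(ip)
     return views

 def unreviewed_pages(access_path, edit_path):
     """Return pages with no view from any IP other than the author's."""
     views = views_by_page(access_path)
     flagged = []
     with open(edit_path) as f:
         for line in f:
             if not line.strip():
                 continue
             page, author_ip = line.rstrip("\n").split("\t")
             if not (views.get(page, set()) - {author_ip}):
                 flagged.append(page)
     return flagged

 if __name__ == "__main__":
     for page in unreviewed_pages(ACCESS_LOG, EDIT_LOG):
         print("No non-author view yet:", page)

As the caveat above notes, an IP-based check like this can only suggest that review happened, not prove it.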
I'm not a member, but apparently on TheWell they have a system where a host can hide a comment, but the comment can be unhidden by the author if they choose.
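For clarity, here is a minimal sketch of that hide/unhide rule as described: a host may hide a comment, and only the comment's author may unhide it. The class and function names are illustrative assumptions, not TheWell's actual system.

 # Hypothetical hide/unhide permission rule, per the description above.
 from dataclasses import dataclass

 @dataclass
 class Comment:
     author: str
     text: str
     hidden: bool = False

 def hide(comment, actor, hosts):
     """Hosts may hide a comment; returns True if the action was allowed."""
     if actor in hosts:
         comment.hidden = True
         return True
     return False

 def unhide(comment, actor):
     """Only the comment's author may unhide it."""
     if actor == comment.author:
         comment.hidden = False
         return True
     return False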
See also DelayAction, AcademicPeerReview.