Posted: Thu Jan 24, 2008 2:50 pm
What you have described sounds good, as far as it goes. For a certain group of people, mostly within the Church, it should work quite well. I could be wrong, of course, but your scenario seems to assume that the (soon-to-be-non-duplicated) base of names in the NFS will be the base upon which all others will build. What if someone who is not linked historically to current Church members wants to jump in and use the system, perhaps someone among the maybe 4 million genealogists outside the Church? How would they sync up with others in the cooperative fashion you have described, without some common base to start from? Would they need to wait until some Church-member researcher happens to add a name that provides a link for the outsider into the existing database, so that they have a logical place to add their data? Would they even have the necessary logon permissions to add all the data they have collected, whether it is in their direct ancestral lines or not? If they just dump new, unconnected data into the NFS system, will that not set off some "potential duplicate" alarms, figuratively or literally? Will it raise worries about the huge manual effort needed to reconcile and eliminate any duplicates that may have been added?
That is why I am thinking of a more generic method to help more people cooperate, often independently of the current NFS database contents. To begin with, in either case, NFS or my idea, people would be putting in their research work as it is. But in the method I suggest, at the very point the work goes in, the best-researched and best-lineage-linked version of the data about any particular person could, in most cases, be picked out mostly automatically from among what is available at that point. (Or there might be a monthly run to perform this "best data" selection process.) After that, there would usually be no need to reconcile the other existing duplicates. The unused ones could just slide into obscurity, as the accepted version grows in completeness and quality through the kind of joint improvement you suggest. Strangely enough, to minimize the confusion and wasted effort caused by duplicates, the system needs to be highly tolerant of duplication, accepting it as normal rather than disruptive or wasteful, because there is an easy way to pick out the best and leave the rest.
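To make the idea concrete, here is a minimal sketch of what that automatic "best data" selection might look like. Everything in it is my own assumption for illustration: the record fields, the scoring weights, and the function names are all hypothetical, not anything NFS actually implements. The point is only that if each version of a person carries measurable signs of research quality (filled-in vital facts, cited sources, lineage links), a simple scoring pass can surface the best version and leave the duplicates alone:

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class PersonRecord:
    """One submitted version of a person (hypothetical structure)."""
    name: str
    birth: Optional[str] = None    # e.g. "1823, Norway"
    death: Optional[str] = None
    sources: int = 0               # count of cited source documents
    linked_parents: int = 0        # upward lineage links (0, 1, or 2)
    linked_children: int = 0       # downward lineage links

def score(rec: PersonRecord) -> int:
    """Illustrative scoring: reward filled vital facts, cited sources,
    and especially lineage links (the 'best-lineage-linked' criterion)."""
    s = 0
    s += 2 if rec.birth else 0
    s += 2 if rec.death else 0
    s += rec.sources                # each cited source adds weight
    s += 3 * rec.linked_parents     # lineage links weigh heavily
    s += 1 * rec.linked_children
    return s

def pick_best(duplicates: List[PersonRecord]) -> PersonRecord:
    """Select the best-researched version among duplicates of one person.
    The others are simply left in place to 'slide into obscurity'."""
    return max(duplicates, key=score)
```

Such a pass could run at submission time or as the monthly batch I mentioned; either way, no one has to manually merge or delete the lower-scoring versions.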
As it is, I believe there is an assumption within the NFS that all duplicates need to eventually be reconciled. Its goal is to be very intolerant of duplicates. But I believe that is mostly because we already have this relatively high-value ordinance data associated with essentially all of them. In the more generic case I am concerned with, the preeminent goal would be to help everybody find the best data anyone has entered and run with it, improving it in a cooperative way, and largely ignore the many other less complete versions that may be floating around in there. A small part of the selected and improved data may eventually have temple work associated with it, but that is not the main concern to begin with. Most outsiders will not share our interest in that aspect of genealogical research.