Have an option to use the Duplicate Constituent 7.92 criteria or specify your own criteria

In The Raiser's Edge 7.92, the new duplicate criteria feature does not allow users to check for duplicates using only the fields they choose. This poses a problem when importing new constituents whose address differs from the one already on record. It would be nice to have an option to use the new algorithm as the default or to specify your own duplicate fields as they existed pre-7.92.
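For readers comparing the two behaviors: the pre-7.92 approach this idea asks to restore is essentially exact matching on a user-chosen set of fields. A minimal sketch in Python (the record shape and field names are illustrative assumptions, not Raiser's Edge internals):

```python
# Hypothetical sketch of user-configurable duplicate criteria, roughly
# as they worked pre-7.92: the user picks which fields must match
# exactly for two constituent records to be flagged as duplicates.
# Field names are illustrative, not Raiser's Edge internals.

def is_duplicate(record_a, record_b, criteria):
    """Return True if every field named in `criteria` matches exactly."""
    return all(record_a.get(f) == record_b.get(f) for f in criteria)

existing = {"last_name": "Smith", "first_name": "Bob",
            "address": "12 Home St"}
incoming = {"last_name": "Smith", "first_name": "Bob",
            "address": "99 Business Ave"}

# Matching on name alone flags the pair even though the addresses
# differ, which is the flexibility the idea above asks to restore.
print(is_duplicate(existing, incoming, ["last_name", "first_name"]))  # True
print(is_duplicate(existing, incoming,
                   ["last_name", "first_name", "address"]))           # False
```

Matching on name alone flags the pair even when addresses differ, which is exactly the flexibility the import scenarios described in the comments depend on.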

  • Guest
  • Jun 14 2012
  • Reviewed: Voting Open
  • Jacquelyn Jones commented
    August 23, 2021 11:56

    YES!

  • Guest commented
    August 20, 2021 21:00

    Yes! Please develop this important feature. We definitely need it!

  • Admin
    David Springer commented
    August 20, 2021 20:55

    Changing the status of this idea to Voting Open because it's been reviewed. While not currently planned, this will remain open for voting and future consideration as a change.

  • Guest commented
    March 17, 2016 22:43

    YES! There are going to be times when I legitimately need to define my own rules for duplicate criteria. It doesn't make sense to take away options that existed previously. I have a list of 1000s of people and their business address. Some of those people already exist in our database, BUT with their HOME address. The new algorithm does not consider them duplicates. My only workaround (outside of manually entering 1000s of records) is to only import them by name to find duplicates. Then I will have to export them and re-import their business address as an additional address on their record. VERY VERY time consuming!

  • Guest commented
    March 17, 2016 22:43

    Absolutely needed. We import 500,000+ records each year in a name acquisition campaign. Some of the records have names and some do not. I used to change the duplicate criteria prior to importing based on whether a name was provided...for records with no name, I looked only at address lines. For records with names, I looked at last name plus address lines. I have no idea how to manage this now...HELP!!!
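The workflow this commenter describes, switching criteria depending on whether a name was provided, can be sketched as a simple per-record rule. A hypothetical illustration only; the field names are assumptions, not Raiser's Edge internals:

```python
# Sketch of the commenter's pre-7.92 workflow: choose which duplicate
# fields to compare per record, based on whether a name was supplied.
# Field names are illustrative, not Raiser's Edge internals.

def pick_criteria(record):
    """Name present: match on last name plus address; else address only."""
    if record.get("last_name"):
        return ["last_name", "address_line_1"]
    return ["address_line_1"]

named = {"last_name": "Jones", "address_line_1": "5 Oak Rd"}
unnamed = {"address_line_1": "5 Oak Rd"}

print(pick_criteria(named))    # ['last_name', 'address_line_1']
print(pick_criteria(unnamed))  # ['address_line_1']
```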

  • Jacquelyn Jones commented
    March 17, 2016 22:43

    Today was my first morning with 7.92. The inability to select duplicate search criteria added to my daily work rather than improving it. I import roughly 10,000 records a week. With editable criteria as in 7.91, I was left with about 300 records as possible duplicates that are being manually looked up to import the additional data collected. With the new hard-coded criteria, I am afraid to think what is going to happen to that manual look-up count.

  • Guest commented
    March 17, 2016 22:43

    Please Change This Back!!!! What used to be a manageable PDF of less than 500 records has ballooned to 4,000-5,000 records and takes FOREVER to run! This part of the program is meant to help clean up our databases, not make it more frustrating.

  • Guest commented
    March 17, 2016 22:43

    Chris Moy - are you on Patch 6 yet? Patch 6 resolves the time issue and the records not really duplicates.

  • Guest commented
    March 17, 2016 22:43

    Why was this changed? What were the benefits of the update? I assume you received a lot of comments from people asking for this new program. What didn't they like about the old program?

  • Guest commented
    March 17, 2016 22:43

    It was SOOO limiting. It could only do exact matches and only match on certain criteria, with no OR matching (i.e. if last name and first name match OR if last name and address match OR if emails match, etc.). The new feature uses a complex algorithm (which admittedly needed and still needs some tweaking) which will catch people like Bob Smith and Robert Smith, which the old system could NEVER do. It is so much better.
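The kind of match this commenter credits to the new algorithm (Bob Smith vs. Robert Smith) is typically done with a nickname lookup table. A toy sketch, assuming a tiny sample table rather than anything Blackbaud actually ships:

```python
# Minimal sketch of nickname-aware name matching -- the match the
# commenter says the new algorithm catches ("Bob" vs "Robert") and
# exact-field comparison never could. The nickname table is a tiny
# illustrative sample, not Blackbaud's data.

NICKNAMES = {"bob": "robert", "rob": "robert", "bill": "william",
             "liz": "elizabeth", "peggy": "margaret"}

def canonical(first_name):
    """Lowercase, trim, and map a nickname to its canonical form."""
    name = first_name.strip().lower()
    return NICKNAMES.get(name, name)

def names_match(a, b):
    return canonical(a) == canonical(b)

print(names_match("Bob", "Robert"))   # True
print(names_match("Bob", "Robbie"))   # False (not in the sample table)
```

Real matching engines combine a rule like this with phonetic or edit-distance comparison, which is why tuning (the "tweaking" mentioned above) matters.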

  • Guest commented
    March 17, 2016 22:43

    The new algorithm is great for users who are doing single searches, but for us - we bulk import! We are not at all happy with losing the ability to define the duplicate search criteria. We have invested a lot of time and money in reducing duplicates on our database, and it was looking as if our investment was paying off until we upgraded to 7.92. Now we are still waiting to successfully run our first duplicate report after a couple of months of trying. One reason apparently is "you do have a large database!" - Come on Blackbaud, we thought you were thinking BIG! Give us back the ability to choose our own criteria; don't take something away that was working!

  • Guest commented
    March 17, 2016 22:43

    Shirley B. - are you on the most recent patch? What about that version's duplicate report is still not working for you?

  • Guest commented
    March 17, 2016 22:43

    Hi Melissa
    We have been working with BB, and so far it takes four days to run on a dedicated server with the latest patch. I hate to inconvenience our users, so may have to wait for a bank holiday weekend to run. But that is not really the point - the point is that we have lost the ability to choose our own criteria, which we would like back.

  • Guest commented
    March 17, 2016 22:43

    I didn't realize it was still taking anyone more than a few hours to run. How long did it take before?

  • Guest commented
    March 17, 2016 22:43

    Pre-7.92? A few minutes.

  • Guest commented
    March 17, 2016 22:43

    It would be great to add an "Include..." option (all records, selected records, or one record) to the duplicate report feature. This would come in handy after an import, for those of us importing more constituents while still working through the pages (and pages) of results the algorithm produced.

  • Guest commented
    March 17, 2016 22:43

    The criteria I used before the upgrade worked perfectly well for us (with a database of about 50,000). I am currently trying to validate an import file of about 200 names so I can identify duplicates before importing. Then I can remove the true duplicates and import the remainder. I can't tell you how long the process took because my computer stalls ("program is not responding"), but after 4 hours it was less than half done. The last time I did this with a much longer list of names, it took well under 30 minutes.

  • Guest commented
    March 17, 2016 22:43

    Has anyone purchased the dedup customization plug in for cleaning up duplicate records in Raiser's Edge? Do the results justify the customization cost and annual maintenance fee?

  • Guest commented
    March 17, 2016 22:43

    We import large amounts of contact data where the email is the one and only unique identifier to de-dupe against the rest of the database. Not having the ability to select email as the duplicate criteria is going to be hugely time consuming. Please give us the CHOICE!!
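De-duplicating an import file against a database on email alone, as this commenter asks for, is straightforward when email is the one unique key. A minimal sketch (illustrative record shape and helper name; not a Raiser's Edge API):

```python
# Sketch of de-duplicating an import against existing records using
# email as the sole key. Normalising case and whitespace avoids
# trivial mismatches. Record shape is illustrative only.

def dedupe_by_email(incoming, existing):
    """Split incoming records into (new, duplicates) by email key."""
    seen = {r["email"].strip().lower() for r in existing if r.get("email")}
    new, dupes = [], []
    for rec in incoming:
        key = (rec.get("email") or "").strip().lower()
        (dupes if key and key in seen else new).append(rec)
    return new, dupes

existing = [{"email": "a@example.org"}]
incoming = [{"email": "A@example.org "},   # duplicate (case/space differ)
            {"email": "b@example.org"},    # genuinely new
            {"last_name": "NoEmail"}]      # no key -> treated as new

new, dupes = dedupe_by_email(incoming, existing)
print(len(new), len(dupes))  # 2 1
```

Records with no email fall through as "new" here; a real workflow would route those to a different criteria set, which is the choice the commenter is asking to keep.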

  • Guest commented
    March 17, 2016 22:43

    Anonymous, I am pretty sure the new algorithm in 7.92 factors email into the duplicate criteria, so I think you should be fine.
