Unique constraint field1 OR field2 should be unique - database-design

My client has a huge legacy DB and an even larger code base. There is a table "user" which contains user data, including the email address, and a 1-1 table "contact" which contains the user's contact info, including a phone number.
According to the business logic, a user should have a unique email OR phone, i.e. there should not be two users with the same phone or the same email in the DB.
Here is how this rule is currently enforced. The API method "add_user" contains the following code (I'm using some pseudocode):
if ((email == null) && (phone == null)) {
    throw error "email or phone should be set"
}
if ((email != null) && (db.getUserByEmail(email) != null)) {
    throw error "user with such email exists"
}
if ((phone != null) && (db.getUserByPhone(phone) != null)) {
    throw error "user with such phone exists"
}
// we are OK
db.addUser(phone, email, ...)
This code has a problem: two parallel calls at the same time can create two users with the same email or phone. Serializing the calls is not an option; there are several API servers behind a load balancer.
I have the following solution in mind:
1. In the "user" table, create a column "uq_key" that will contain a combined type+field value, e.g. "e|abc#domain.com" or "p|1(234)2435234"
2. Create a unique index on this column
Question: What are the different ways to enforce that these values are unique? Table-level locks are not an option; is there something else?
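If the DB is relational, one common way to make step 1/2 work is to let the unique index itself do the enforcement: always attempt the insert and treat a duplicate-key error as "user already exists", so there is no check-then-insert window at all. Below is a minimal sketch, assuming SQL Server and ADO.NET; the table and column names mirror the proposal above, while the AddUser method, the index name, and the DDL in the comment are hypothetical, and the duplicate-key error numbers (2601/2627) are SQL Server specific.

using System;
using System.Data.SqlClient;

// One-time DDL (hypothetical names), e.g.:
//   ALTER TABLE [user] ADD uq_key VARCHAR(320);
//   CREATE UNIQUE INDEX UX_user_uq_key ON [user](uq_key);
public static void AddUser(SqlConnection conn, string email, string phone)
{
    // Build the combined key exactly as proposed: "e|<email>" or "p|<phone>".
    string uqKey = email != null ? "e|" + email : "p|" + phone;

    using (var cmd = new SqlCommand(
        "INSERT INTO [user](email, phone, uq_key) VALUES (@email, @phone, @uqKey)", conn))
    {
        cmd.Parameters.AddWithValue("@email", (object)email ?? DBNull.Value);
        cmd.Parameters.AddWithValue("@phone", (object)phone ?? DBNull.Value);
        cmd.Parameters.AddWithValue("@uqKey", uqKey);
        try
        {
            cmd.ExecuteNonQuery();
        }
        catch (SqlException ex)
        {
            // 2601/2627 are SQL Server's duplicate-key error numbers.
            if (ex.Number != 2601 && ex.Number != 2627)
                throw;
            // The unique index rejected the row: a user with this email or phone
            // already exists, no matter which API server handled the other call.
            throw new InvalidOperationException("user with such email or phone exists");
        }
    }
}

Whatever the engine, the point is the same: the uniqueness decision is made atomically by the database at insert time, so two parallel add_user calls can never both succeed.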

Related

Google App Engine - Datastore - Get an entity, not by its key

I'm building an Android app, and I'm using Google App Engine to store users' data. I want users to be able to connect to their account from other devices, but I could not figure out how.
The users table (kind) has 4 properties: id, email, nickname and level.
I have read this:
https://cloud.google.com/appengine/docs/standard/java/datastore/creating-entities
It says there that I can only get entities by their key, and that's a problem for me, because on a new device I can only get the user's email, not the key and id. I need to get the key and id by the email.
Is this possible?
If it's not, is there any other solution?
You just need to do a simple query. What language are you using in the GAE backend? For example, in Python, you would do something like:
def get_user_by_prop(prop, value):
    this_user = User.query(getattr(User, prop) == value).get()
    return this_user
Judging from the link in your question, I assume you are using Java? Here are the docs for a query: https://cloud.google.com/appengine/docs/standard/java/datastore/retrieving-query-results
where they use the example:
Query q =
    new Query("User")
        .setFilter(new FilterPredicate("nickname", FilterOperator.EQUAL, "billybob"));
PreparedQuery pq = datastore.prepare(q);
Entity result = pq.asSingleEntity();
What is stored in the id property? Does it have some meaningful value, or is it just a random unique number used as a unique identifier?
It seems like you could design your database differently so that the email address is your unique identifier. In that case, you would have a User table containing the following properties:
email (your unique identifier), nickname and level.
That way you will be able to use the following code snippet:
String userEmail; // Get the user's email
Key userKey = KeyFactory.createKey("User", userEmail);
Entity user = datastore.get(userKey);
Regardless of that, you can still access your entity without its key by using a query: instead of looking the entity up by its key, you filter on the given property value and fetch the matching entity.
The query would look something like that:
String userEmail; // Get the user's email
Filter propertyFilter =
    new FilterPredicate("email", FilterOperator.EQUAL, userEmail);
Query q = new Query("User").setFilter(propertyFilter);
PreparedQuery pq = datastore.prepare(q);
try {
    Entity user = pq.asSingleEntity();
    // Same as the Entity user = datastore.get(userKey) mentioned above
} catch (TooManyResultsException e) {
    // More than one result was returned from the query.
    // Add code for dealing with the exception here.
}

How to ensure that no duplicate records are created using LINQ to SQL in multi-threaded applications?

Creating a record follows a simple flow: check whether a similar record already exists in the DB, and if not, create one. The seriesInstUid is not the primary key; the primary key is generated by SQL Server, since the column has the identity property set up on the server.
Like the code below:
DataClassAsclepiusImagingDataContext db = new DataClassAsclepiusImagingDataContext();
var matchingSeries = from s in db.Series
                     where s.DDSeriesInstanceUID == dd.seriesInsUid
                     select s;
if (!matchingSeries.Any())
{
    Series ser = new Series();
    ser.DDSeriesInstanceUID = dd.seriesInsUid;
    db.GetTable<Series>().InsertOnSubmit(ser);
    db.SubmitChanges();
}
The problem occurs when a few concurrent threads try to execute the same code in rapid succession: a record may be created by another caller in between the "does the record exist?" check and the "if not, create a new record" step. In that case one of the callers creates a duplicate.
What is a good way to ensure that duplicates are not created in this scenario?
Here are a couple of ways to do it:
1) You can add a unique constraint in the database to be absolutely sure that no duplicates can be created.
2) Encapsulate the data insert code within a lock block to ensure that only one thread can execute the insert at a time.
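For option 1, the calling code also has to expect the constraint to fire. Here is a minimal sketch, assuming SQL Server, a unique index on Series.DDSeriesInstanceUID (the index name in the comment and the AddSeriesIfMissing method are hypothetical), and the same LINQ to SQL context shown in the question: the insert is attempted unconditionally, and a duplicate-key error is treated as "already exists".

using System.Data.SqlClient;

// Assumes a unique index/constraint was created once in the database, e.g.:
//   CREATE UNIQUE INDEX UX_Series_DDSeriesInstanceUID ON Series(DDSeriesInstanceUID);
public static void AddSeriesIfMissing(string seriesInstanceUid)
{
    var db = new DataClassAsclepiusImagingDataContext();
    var ser = new Series { DDSeriesInstanceUID = seriesInstanceUid };
    db.GetTable<Series>().InsertOnSubmit(ser);
    try
    {
        db.SubmitChanges();
    }
    catch (SqlException ex)
    {
        // 2601/2627 are SQL Server's duplicate-key error numbers.
        if (ex.Number != 2601 && ex.Number != 2627)
            throw;
        // Another thread inserted the same series between the check and the insert;
        // the unique index rejected the duplicate, so treat it as "already exists".
    }
}

Option 2 (a lock block) works too, but it only serializes threads within a single process; the unique constraint is the guarantee that holds no matter where the insert comes from.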

WPF Binding issue (UNIQUE CONSTRAINT violation on UPDATE) how to reject changes?

OK, please be gentle, I am new to WPF and LINQ, and I have a strange problem here. I have a search screen and an add/edit screen. The add/edit screen is bound to a 'CompanyContact' object and the search screen is bound to a collection (CompanyContacts).
I have a 3-column unique constraint (FirstName, LastName, CompanyId) on the CompanyContact db table, so the same name can't appear twice for the same company.
I should also mention that I have an "AFTER UPDATE" trigger on the CompanyContact table that refreshes the 'ModifiedDate' column, because I don't like allowing the client PC to dictate the modified date/time (I want the database to keep track of when the record was modified). I let the DEFAULT constraint put GetDate() into this column on INSERTs.
Let's say there is a "Steve Smith" at CompanyId 123 and there is also a "Steve Smith2" at CompanyId 123.
If I edit an existing company contact (Steve Smith2 at CompanyId 123) and change the last name from "Smith2" to "Smith" so that the unique constraint fires (collision with Steve Smith at CompanyId 123), everything seems to work fine: the Edit screen traps the SqlException, 'resets' the properties back to their original values by resetting base.DataContext, and notifies the user ("hey, you can't do that... it would cause a duplicate record"). But when I dismiss the Edit screen (click the CANCEL button) and return to the Search screen, the offending data is showing in the search results, i.e. there are now 2 records showing Steve Smith at CompanyId 123.
I have tried many things, including writing LINQ code to check for duplicates before attempting to UPDATE, but it seems like there should be a simpler solution than that. I am a big believer in putting rules into the database so that they are enforced consistently for everyone, including people who work directly against the database (on the backend).
Here's a snippet from the Add/Edit screen (the Search screen can call this function):
public CompanyContact EditCompanyContact(int companyContactId)
{
    CompanyContact myCompanyContact;
    try
    {
        _screenMode = ScreenMode.Edit;
        myCompanyContact = new CompanyContactRepository().GetById(companyContactId);
        //experimental code -- use this to reset base DataContext if unique constraint violated...
        _originalCompanyContact = (CompanyContact)myCompanyContact.Clone();
        //make sure to clone the object so we can discard changes if user cancels
        base.DataContext = (CompanyContact)myCompanyContact.Clone();
        SetupScreen();
        this.ShowDialog();
        //if user cancels Edit this is 'reset' to originalCompanyContact
        return ((CompanyContact)base.DataContext);
    }
    finally
    {
    }
}
and here is code from the 'cancel button'
private void btnCancel_Click(object sender, RoutedEventArgs e)
{
    try
    {
        //HACK: this allows us to discard changes to the object passed in (when in EDIT mode)
        //TODO: research a truer WPF approach? (RoutedEvents?)
        _userCancelled = true;
        base.DataContext = _originalCompanyContact;
        this.Close();
    }
    finally
    {
    }
}
here is the code that is executed when you try to Save on the Add/Edit screen:
try
{
    if (base.DataContext != null)
    {
        CompanyContactRepository ccr = new CompanyContactRepository();
        cc = ((CompanyContact)base.DataContext);
        ccr.Save(cc);
    }
    //dismiss the form after saving CompanyContact
    this.Close();
}
catch (SqlException sx)
{
    if (sx.Message.IndexOf("Violation of UNIQUE KEY constraint 'UN_CompanyContact_Value'.") == 0)
    {
        MessageBox.Show(String.Format("a CompanyContact with the name ({1} {0}) already exists for {2}", cc.FirstName, cc.LastName, cc.Company.Name), "Duplicate Record", MessageBoxButton.OK, MessageBoxImage.Exclamation);
    }
    else
    {
        //yes - catch and rethrow is probably a bad practice, but trying to ISOLATE UNIQUE constraint violation
        throw sx;
    }
}
finally
{
}
and here is some LINQ code for the Save (sorry it is FUGLY! - I've been hacking around with it all day)
public void Save(CompanyContact entityToSave)
{
    try
    {
        var saveEntity = (from cc in db.CompanyContacts
                          where cc.CompanyContactId == entityToSave.CompanyContactId
                          select cc).SingleOrDefault();
        if (saveEntity == null)
        {
            //INSERT logic
            entityToSave.CreatedById = new CompanyPersonRepository().GetCompanyPerson(DataContext.Default.LoginUsername).CompanyPersonId;
            entityToSave.ModifiedById = entityToSave.CreatedById;
            db.CompanyContacts.InsertOnSubmit(entityToSave);
            db.CompanyContacts.Context.SubmitChanges();
        }
        else
        {
            //UPDATE logic
            saveEntity.ModifiedById = new CompanyPersonRepository().GetCompanyPerson(DataContext.Default.LoginUsername).CompanyPersonId;
            saveEntity.CompanyId = entityToSave.Company.CompanyId;
            saveEntity.FirstName = entityToSave.FirstName;
            saveEntity.LastName = entityToSave.LastName;
            saveEntity.CompanyContactTypeId = entityToSave.CompanyContactTypeId;
            db.CompanyContacts.Context.SubmitChanges();
        }
        ...
OK I found a solution, but it seems like it is more of a hack than a true solution... surely there is a better way to fix this?
I put a catch block in my LINQ Code (in the Save() function) like this:
if (sx.Message.IndexOf("Violation of UNIQUE KEY constraint 'UN_CompanyContact_Value'.") == 0)
{
    //this will refresh the cache... so you won't see things that violate constraints
    //showing on the SearchCompanyContacts if you return from the Edit CC screen...
    db.Refresh(RefreshMode.OverwriteCurrentValues, db.CompanyContacts);
    throw sx;
}
else
{
    throw sx;
}
This feels very hollow and unsatisfying. I really don't like that I am catching and rethrowing the exception, but it's the only way to keep enough detail to explain why the save failed... and it wouldn't be necessary if I didn't need the db.Refresh() call (I could just let the SqlException bubble up the call stack and handle it with an ExceptionHandler class).
I also changed the logic so that I return NULL from the Edit screen if the user cancels out; then on the search screen I have this small change too:
selectedCompanyContact = editCompanyContact.EditCompanyContact(selectedCompanyContact.CompanyContactId);
if (selectedCompanyContact == null)
{
    //refresh from db..
    ExecuteSearch();
}
so that the search screen refreshes the search results and the "reset" value (i.e. original data) is shown.
But surely this cannot be the proper way to write LINQ? I would expect there is some setting I can define that says: if the database rejects the change (UNIQUE CONSTRAINT violated, FOREIGN KEY violated, etc.), throw away whatever is cached in the collection that violates the database rules. What's interesting right now is that the search screen and the edit screen don't immediately erase what the user typed, but once I hit the Cancel button the screen 'refreshes' and you see the original value...
One thing I like about this solution is that the Edit screen does not get dismissed when you violate a rule (you get a 'duplicate record exists' message), and you can still see your data unaltered in the fields bound to the entity 'to be saved', so you can review it and say "aha... I see why this data was rejected". You can then change your data, and the OK button will keep rejecting it until it meets all the rules defined in the database; or, if you want, you can hit the CANCEL button, your changes are thrown away, and the search results screen refreshes to show the old data.
I was intrigued by this function:
db.CompanyContacts.Context.GetChangeSet().Updates.Clear();
but unfortunately it's a read-only list, so that call fails... I am wondering if there's a way to use an object from the list (e.g. db.CompanyContacts.Context.GetChangeSet().Updates[0]) to take the offending entity out, or to detach/reattach it - not sure if there's something I'm missing or not, but it feels that way.
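Not necessarily a "truer" WPF fix, but one slightly narrower variant of the same hack, sketched under a few assumptions: SQL Server (2627/2601 are its duplicate-key error numbers, which is less fragile to check than matching the constraint name in the message text), and the Save() method shown above with System.Data.Linq and System.Data.SqlClient in scope. The idea is to catch around just the update's SubmitChanges(), where saveEntity is still in scope, and refresh only that entity instead of the whole table:

// Inside the UPDATE branch of Save(), replacing the plain SubmitChanges() call:
try
{
    db.CompanyContacts.Context.SubmitChanges();
}
catch (SqlException sx)
{
    if (sx.Number == 2627 || sx.Number == 2601)
    {
        // Discard only the offending in-memory changes so the cached
        // entity no longer carries the rejected values.
        db.Refresh(RefreshMode.OverwriteCurrentValues, saveEntity);
    }
    // Rethrow so the Add/Edit screen can still show its "duplicate record" message.
    throw;
}

If the Search screen is bound to entities from the same DataContext, this may be enough to make the rejected values disappear without a full ExecuteSearch() refresh.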
I am serious.. and don't call me Shirley... ;-)

Multi-location updates security rules in Firestore

How can I ensure atomic writes of multiple documents on the client side only?
For example, when creating a record, a record document is generated and, at the same time, a record log document is also generated (in two different locations).
I don't want the user to be able to get around this by creating only the record document without the record log document. Is this possible with the current Firestore security rules?
Use the getAfter() function to look at what the state of a document would be after a write or set of writes. You can use this to make sure a value in another document is updated, for example:
service cloud.firestore {
  match /databases/{database}/documents {
    // Allow a user to update a record or log only if they keep the timestamps equal for the same ID
    match /records/{record} {
      allow write: if request.auth.uid != null &&
        getAfter(/databases/$(database)/documents/logs/$(record)).data.timestamp == request.resource.data.timestamp;
    }
    match /logs/{log} {
      allow write: if request.auth.uid != null &&
        getAfter(/databases/$(database)/documents/records/$(log)).data.timestamp == request.resource.data.timestamp;
    }
  }
}
Docs: https://firebase.google.com/docs/firestore/reference/security/#service_defined

Read / Write only own records on Firestore [duplicate]

This question already has an answer here: Firestore rules on object data type (1 answer)
I am rather lost with Firestore Rules.
I want authenticated users to be able to read their own records, but I cannot manage to achieve that. I am writing the userId into each record. When reading, I expect the user to get all records where the field userId == request.auth.uid. Here is my code from the Firestore console:
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read: if resource.data.userId == request.auth.uid;
    }
  }
  match /databases/{database}/documents {
    match /{document=**} {
      allow write: if request.auth.uid != null;
    }
  }
}
Writing is OK, but when reading I get a "missing or insufficient permissions" exception in my app. I checked that FirebaseAuth.getInstance().getUid() returns a value matching my userId field.
According to Todd Kerpelman
Cloud Firestore doesn't have the time to search through every record in your database to ensure that your user has access, so it will reject this query. Instead, you'd need to run a query where Cloud Firestore can "prove" that all documents you'd retrieve will be valid.
Solution:
Try with this query, specifying that you want to receive only the user's documents:
FirebaseFirestore.getInstance().collection("docs").whereEqualTo("userId", uid )
where uid is the uid of your auth user:
String uid = FirebaseAuth.getInstance().getCurrentUser().getUid();
