Go sql NULL value workarounds

I want to use Go to create an API for an existing database that makes extensive use of NULL values. Go's database/sql will not scan NULLs into plain strings (or their equivalents), so I need to implement a workaround.

The workarounds I have found leave me unsatisfied. I was actually considering a dynamic language because of this problem, but Go has certain attractions and I would like to stick with it if possible. Here are the workarounds that did not satisfy me:

  • Do not use NULLs in the database. Not suitable, because the database already exists and I am not at liberty to change its structure. The database is more important than my application, not the other way around.
  • In SQL queries, use COALESCE, ISNULL, etc. to convert NULLs to empty strings (or equivalents) before the data reaches my application. Not suitable, because there are many fields and many tables. Apart from the few obvious ones (primary key, surname), I don't know exactly which fields can be relied on never to give me a NULL, so I would have to defensively wrap columns all over my SQL queries.
  • Use sql.NullString, sql.NullInt64, sql.NullFloat64, etc. to convert NULLs to empty strings (or equivalents) as an intermediate step before assigning them to the destination type (see the sketch after this list). This has the same problem as above, only it clutters my Go code instead of my SQL queries.
  • Use a combination of pointers and []byte to scan each item without committing it to a concrete type (other than []byte), and then somehow work with the raw data. But to do anything meaningful with the data you have to convert it to something more useful, and then you are back at sql.NullString or if x == nil { handle it }, repeated again for every field I need to work with. So, once more, we are looking at cluttered, messy, error-prone code that is anything but DRY.
  • Take a look at the Go ORM libraries for help. Well, I did, but to my surprise none of them deal with this problem.
  • Make my own helper package that converts all NULL strings to "", NULL ints to 0, NULL floats to 0.00, NULL bools to false, etc., and make it part of the scanning step from the sql driver, so that the result is regular strings, ints, floats and bools.
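For concreteness, here is a minimal sketch of the intermediate-step code that option 3 implies (the table and column names are made up); this is the boilerplate that would have to be repeated for every nullable column:

    var fathername sql.NullString
    var children sql.NullInt64
    err := db.QueryRow(
        "select fathername, children from fams where surname = ?",
        "Nullfather").Scan(&fathername, &children)
    if err != nil {
        log.Fatal(err)
    }

    // Unpack each Null* wrapper into a plain Go value.
    father := "" // default when the column is NULL
    if fathername.Valid {
        father = fathername.String
    }
    kids := int64(0)
    if children.Valid {
        kids = children.Int64
    }
    fmt.Println(father, kids)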

    Unfortunately, if option 6 is the solution, I have no experience writing such a thing. I suspect it would involve something like: "if the intended type of the item being scanned is a string, make it a sql.NullString and extract an empty string from it; but if the item being scanned is an int, make it a sql.NullInt64 and get a zero from it; but if ... (etc.)" A rough sketch of what I imagine is below.
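What I imagine option 6 could look like, as a rough sketch: a custom string type (the name nullStr is made up) that implements the database/sql Scanner interface and maps NULL to "". Similar wrapper types would be needed for ints, floats, bools, and so on.

    // Hypothetical helper type: its Scan method turns a NULL column into "".
    type nullStr string

    func (s *nullStr) Scan(src interface{}) error {
        switch v := src.(type) {
        case nil:
            *s = "" // NULL becomes the zero value
        case string:
            *s = nullStr(v)
        case []byte:
            *s = nullStr(v)
        default:
            return fmt.Errorf("nullStr: cannot scan value of type %T", src)
        }
        return nil
    }

    // Usage: declare the destination as a nullStr and pass its address to rows.Scan.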

Is there something I missed? Thanks.

1 answer

Using pointers as the sql Scan destination variables lets you check the data, work with it (provided you check if x != nil), and marshal it to the JSON that the API should return, without having to scatter hundreds of sql.NullString, sql.NullFloat64, etc. everywhere. Nulls are preserved nicely and pass straight through json.Marshal (see Fathername below). On the other end, the client can deal with the nulls in JavaScript, which is better equipped to handle them.

    // Imports needed: database/sql, encoding/json, fmt, log.
    func queryToJson(db *sql.DB) []byte {
        rows, err := db.Query(
            "select mothername, fathername, surname from fams"+
                " where surname = ?", "Nullfather")
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        type record struct {
            Mothername, Fathername, Surname *string // the key: use pointers
        }
        records := []record{}

        for rows.Next() {
            var r record
            // Scan into the addresses of the pointer fields; a NULL column
            // simply leaves the corresponding pointer nil.
            err := rows.Scan(&r.Mothername, &r.Fathername, &r.Surname)
            if err != nil {
                log.Fatal(err)
            }
            records = append(records, r)
        }

        j, err := json.Marshal(records)
        if err != nil {
            log.Fatal(err)
        }
        return j
    }

    // Elsewhere, e.g. in main:
    j := queryToJson(db)
    fmt.Println(string(j))
    // [{"Mothername":"Mary","Fathername":null,"Surname":"Nullfather"}]
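If the values also need to be used on the Go side before marshalling, the if x != nil check mentioned above would look something like this (a sketch that would live inside queryToJson, after the rows loop):

    for _, r := range records {
        surname := "unknown" // fallback when the column was NULL
        if r.Surname != nil {
            surname = *r.Surname
        }
        fmt.Println(surname)
    }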
