Making Changes to Many Forms In Access

Generally I prefer to create custom interfaces for my databases using fully developed programming languages like Python or C#, since I like having full access to their computational capabilities, and in particular their graphing, even when the initial plan does not call for these. Projects have a tendency to expand over time. But when I need to quickly create a simple front end for a database, I sometimes turn to MS Access.

One issue I have faced occasionally is being asked to make the same change to all of the forms or all of the reports. Fortunately, the VBA built into Access makes this relatively simple for certain types of changes. You can iterate through all forms, make the desired change, and then save that change so it is permanent.

This is best described with an example, though I have only tested this with Access 2010 and 2013. For this example, I am just going to create a database with one table and two forms. I will put the VBA code itself into a module. The code will look like:

Public Function ChangeBkColor()
    Dim obj As AccessObject 'Entities in Allforms are AccessObjects
    Dim CurForm As Form
    'We want everything to be closed to avoid any problems
    'with reopening later
    For Each CurForm In Application.Forms
        DoCmd.Close acForm, CurForm.Name
    Next CurForm
    For Each obj In Application.CurrentProject.AllForms
        'If not opened as design, it may not save the change
        DoCmd.OpenForm obj.Name, acDesign
        'There should only be one open form now
        Set CurForm = Application.Forms(0)
        'Change the color to red, but could change others
        CurForm.Section(acDetail).BackColor = RGB(100, 0, 0)
        'Setting acSaveYes here is significant
        DoCmd.Close acForm, CurForm.Name, acSaveYes
    Next obj

End Function

I normally execute it by highlighting the code in the VBA editor and pressing F5, but it could also be tied to a macro if that is more convenient. Executing it opens each form, changes the property in question, and then closes it again, saving the change. Techniques like this have saved me a fair bit of time when standardizing things across numerous forms in a database.



Keep it short

I have been thinking about resumes a great deal lately. Since I recently passed the Bar Exam, I have been revising and sending out my resume in search of a legal job. At my current job, I have also fairly recently reviewed several resumes from new applicants. I cannot claim to be a resume expert, and I will not try to give any detailed advice. But there is one thing that nearly everyone who has given me advice has agreed on, and that I wish the people whose resumes I have to review had remembered.

Keep it short.

When I researched resumes while looking for technical jobs, and again more recently when looking for legal jobs, virtually everyone recommended keeping them short. When I review resumes, I distinctly prefer them short. Exactly what short means varies somewhat by who is giving the advice, but some recommend strictly keeping it down to one page, and a few others say that two is acceptable. When I have reviewed resumes, a second page has never bothered me. But I have been rather annoyed when I have had to review resumes as long as sixteen pages.

Generally, keeping it short will help ensure that the important information is easy to find and review. The unimportant information does not need to be there at all. There are occasions when a potential employer wants something longer and more detailed. However, those employers will generally make that clear by asking for a C.V. or even a portfolio rather than a resume. With a bit of help from a skilled counselor at my law school, I ruthlessly trimmed my resume down to one page, and I think it is stronger now that it is shorter.

Dealing with SQL Server LOCK Resource Errors

I was recently doing some testing that required a fairly large table. I created the test table, and set it up to insert about 30 million rows. I got an error that read:

The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions.

Naturally, having not encountered that before, I started by Googling the error and looking up Error 1204 and Microsoft’s Support Article. The support article helpfully suggested setting locks to 0 using sp_configure. But that is the default value and mine was already set that way.

So, I created a simplified setup that would reliably produce the error based on Jeff Moden’s Tally Table Article.

if OBJECT_ID('dbo.BigTallyTable', 'U') is not null
    drop table dbo.BigTallyTable

create table dbo.BigTallyTable
(n bigint primary key)

insert into dbo.BigTallyTable(n)
select top (30000000)
    row_number() over (order by (select null)) as n
from master.dbo.syscolumns s1,
    master.dbo.syscolumns s2

And I started playing with it. Since this all tied to locks, one thing I did was break it up into batches:

declare @maxValue int, @maxPerRound int, @last int
set @maxValue = 30000000 --30 Million
set @maxPerRound = 500000

set @last = 0
while @last < @maxValue
begin
    if @maxPerRound > @maxValue - @last
        set @maxPerRound = @maxValue - @last

    insert into dbo.BigTallyTable(n)
    select top (@maxPerRound)
        row_number() over (order by (select null)) + @last as n
    from master.dbo.syscolumns s1,
        master.dbo.syscolumns s2

    print 'Finished a round with @last = ' + cast(@last as varchar) + ' and @maxPerRound = ' + cast(@maxPerRound as varchar)

    set @last = @last + @maxPerRound
end

This avoided the error and generated the results I was looking for. On a multi-user system (I was doing this on my test instance of SQL Server, which is on my personal machine) it would also have allowed other queries that were waiting for locks to execute more freely, though whether that is good or not depends on the circumstances.
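The chunking arithmetic in that loop is language-independent; here is a minimal Python sketch of the same logic, with numbers mirroring the script above:

```python
def batch_sizes(total, per_round):
    """Yield the row count for each insert round, mirroring the T-SQL while loop."""
    last = 0
    while last < total:
        size = min(per_round, total - last)  # shrink the final round if needed
        yield size
        last += size

rounds = list(batch_sizes(30_000_000, 500_000))
# 60 rounds of 500,000 rows each
```

Since 500,000 divides 30 million evenly, every round here is full-sized; with an uneven total the final round simply shrinks, just as the `if @maxPerRound > @maxValue - @last` test does.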

I also got it to run by directing SQL Server to take a table lock right from the beginning:

insert into dbo.BigTallyTable with (TABLOCK) (n)
select top (30000000)
    row_number() over (order by (select null)) as n
from master.dbo.syscolumns s1,
    master.dbo.syscolumns s2

This ran substantially faster than splitting it into batches did and is clearly a simpler query. It does take a more aggressive table lock which, as the article on lock escalation points out, can cause more problems with blocking other queries if that is a concern.
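The error is ultimately about lock memory. Microsoft's documentation puts each lock structure at roughly 96 bytes, so a quick back-of-envelope calculation (the 96-byte figure is from the docs; the rest is purely illustrative) shows why tens of millions of individual row locks are a problem:

```python
LOCK_BYTES = 96          # approximate size of one lock structure, per SQL Server docs
rows = 30_000_000

# If every inserted row held its own row lock, the lock memory alone would be:
lock_memory_gb = rows * LOCK_BYTES / 1024**3
print(round(lock_memory_gb, 1))  # roughly 2.7 GB
```

That is why both workarounds help: batching keeps the number of concurrent locks small, and TABLOCK replaces millions of row locks with a single table lock.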

Order of a Result Set in SQL

I have seen people build procedures that rely on a result set being returned in a certain order.  This is fine if an order by clause is used, but SQL Server will not guarantee the order of the results without one.

It is interesting to look at the way SQL Server normally returns a result set, though.  So, let's make a small sample table:

create table orderTest (n varchar(15))

insert into orderTest (n)
values (2), (1), (3), (1)

If we run a simple “select n from orderTest”, the results I get back are 2, 1, 3, 1.  In other words, it returns the results in the order they were inserted.  This is relatively consistent; I have tried similar tests on multiple versions of SQL Server on multiple machines.  As I said at the beginning, I have seen some programs rely on the data being returned in the order it was inserted, and those programs have gone for extended periods of time without any problems.  But I want to emphasize that SQL Server will not guarantee the order without an order by clause.  If you want to guarantee that you can get the data back in the order it was inserted, you can add an identity column and order by that.
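The identity-column trick amounts to tagging every row with an insertion counter and sorting on it; a small Python sketch of the idea (illustrative only, nothing to do with SQL Server internals):

```python
rows = ['2', '1', '3', '1']  # the insertion order from the example table above

# An identity column is essentially a monotonically increasing tag on each row:
tagged = list(enumerate(rows, start=1))   # [(1, '2'), (2, '1'), (3, '3'), (4, '1')]

# Even if the rows come back in some other order (here: sorted by value)...
scrambled = sorted(tagged, key=lambda pair: pair[1])

# ...ordering by the identity tag restores the insertion order deterministically.
restored = [value for _tag, value in sorted(scrambled)]
assert restored == rows
```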

The tendency to return things in the order they were inserted seems fairly stable with simple where clauses, but asking SQL Server to return distinct results will change the order as it checks for duplicates.  For instance “select distinct n from orderTest” gives me 1, 2, 3.

And SQL Server seems to return the results of a union all by just concatenating the results of the two queries.  So

select n
from orderTest
union all
select n
from orderTest

Gives 2, 1, 3, 1, 2, 1, 3, 1.  Essentially, the contents of the table repeated twice.  But union makes the results distinct, and so it changes the order.  Redoing that with union instead of union all gives 1, 2, 3.

Indexes will also affect the order.  You may notice the tests were all done on a heap with no indexes.  But if I add an index (either clustered or nonclustered) it makes the order for “select n from orderTest” become 1, 1, 2, 3.  Of course, you cannot always rely on the data being returned in index order either, unless there is an order by clause.  Similarly, as Conor Cunningham pointed out, the size of the dataset can affect it and break apparent order as the optimizer seeks to increase parallelization.

In short, SQL Server has certain ways it will normally return data, and it can be helpful to be aware of them.  Even small changes to the query, or changes in the size of the data, can have a major impact on the order in which the results are returned.  Without an order by clause, SQL Server is not guaranteed to return the data in the way it normally does, and you cannot rely on that order.  If the order actually matters, use an order by clause.

SQL Server Three-Valued Logic and ANSI_NULLS

SQL Server uses a three-valued logic with True, False, and Unknown. And, normally, SQL Server NULLs are not comparable. Together, that means that this little script:

if Null = Null -- Results in Unknown
print 'Num 1'

if Null != Null -- Results in Unknown
print 'Num 2'

if not (Null = Null) or not (Null != Null) -- Results in Unknown
print 'Num 3'

normally produces no results. It simply prints the standard ‘Command(s) completed successfully.’ One significant implication of this is that if the where clause includes something like

[column_name] = null

It will filter out all rows. The best way to look for Nulls is to use

[column_name] is null

But, whether or not Nulls are comparable can be controlled by the use of the Ansi_Nulls session setting. The default for Ansi_Nulls is on, and in fact the documentation warns that ANSI_NULLS may always be ON, with no way to turn it off, in future versions. Until then, this test script:

Set Ansi_nulls off

if Null = Null -- Now Results in True
print 'Num 1'

if Null != Null -- Now False
print 'Num 2'

if not (Null = Null) or not (Null != Null) -- Now True
print 'Num 3'

Dutifully prints out:

Num 1
Num 3

It is generally best to leave Ansi_nulls in its standards compliant default setting and use “is null” or “is not null”, but occasionally it can be useful to change that.
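As an illustration of the truth tables involved (not of how SQL Server implements them), the three-valued logic can be modeled in Python with None standing in for Unknown:

```python
UNKNOWN = None  # stand-in for SQL's Unknown truth value

def sql_eq(a, b):
    """Equality under ANSI_NULLS ON: any comparison involving NULL is Unknown."""
    if a is None or b is None:
        return UNKNOWN
    return a == b

def sql_not(t):
    """NOT True is False, NOT False is True, NOT Unknown stays Unknown."""
    if t is UNKNOWN:
        return UNKNOWN
    return not t

# The three IF tests above: none of these are True, so nothing would print
assert sql_eq(None, None) is UNKNOWN           # Null = Null
assert sql_not(sql_eq(None, None)) is UNKNOWN  # not (Null = Null)

# A WHERE clause keeps only rows where the predicate is True:
rows = [1, None, 2, None]
assert [r for r in rows if sql_eq(r, None) is True] == []  # [column] = NULL matches nothing
assert [r for r in rows if r is None] == [None, None]      # IS NULL is the right test
```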

[20 Aug 13 – Updated to add tags and change category]

Variable Scope in SQL Server

The way variable scoping works in SQL Server has been well documented on MSDN, but I have seen related issues come up on forums recently so I decided to go over it.  Generally, a variable in T-SQL is available between when it has been declared and the end of the batch, function, or stored procedure that it is defined in.  That is of course fairly straightforward, but it is easy to miss some of the subtleties on what constitutes a batch.

Go creates a new batch.

The most explicit way to end a batch is with the GO command.  So for instance:

declare @var varchar(10)

set @var = 'Hi'

print @var --This works

GO --End the batch

print @var --Generates an error

Gives results of:


Msg 137, Level 15, State 2, Line 2
Must declare the scalar variable "@var".

Each Execute is in a new batch.

When using SSMS, every call to execute generates a new batch.  So, for instance if you highlight:

declare @var varchar(10)

set @var = 'Hi'

print @var --This works

In SSMS and press F5 (or Execute under the Query Menu) then it is all one batch and the print statement will execute.  If you then immediately highlight that print statement again and press F5 again, it will give the “Must declare the scalar variable” because it is executing in a new batch.

A transaction can include multiple batches.

An explicit transaction can include multiple batches.  So, you can open an explicit transaction, declare a variable within it, start a new batch and so lose access to the variable before the transaction commits.  So, to re-use the first example with a slight tweak:

begin transaction

declare @var varchar(10)

set @var = 'Hi'

print @var --This works

GO --End the batch, but not the transaction

print @var --Generates an error since out of batch

--this particular error won't roll back the transaction, but ends the batch

Results in this message:


Msg 137, Level 15, State 2, Line 2

Must declare the scalar variable "@var".

It got that error because the GO command ended the batch.  The Level 15 error ends the execution but not the transaction, so looking at @@trancount at this point will return one, and we could proceed to commit or roll back the transaction (which doesn't actually matter in this case since it didn't affect anything).

A procedure has a different scope even if called within the same batch.

Even if it is called as part of a longer batch, a procedure has its own scope.  So a procedure can’t directly access other variables in a batch that aren’t passed to it as a parameter and it can’t affect variables outside of it except through output parameters. So if we create a procedure just for testing like:

create procedure SillyProc
as
begin
    declare @sillyvar varchar(5)

    set @sillyvar = 'Hi'

    print @sillyvar
end

then when it is executed there won’t be an @sillyvar available even if it is in the same batch.  Thus this:

exec SillyProc

print @sillyvar --generates an error

Would give the familiar “Must declare” error.

Each execution from an external program is in a new batch.

Generally, each call to execute from a separate program will be in a new batch.  For a python example:

import pyodbc

sqlConnStr = ('DRIVER={SQL Server};Server=(local);Database=TestDb;'+
              'Trusted_Connection=yes')  #assumed completion of the truncated string

sqlConn = pyodbc.connect(sqlConnStr)

curs = sqlConn.cursor()

sql = """

declare @pyvar varchar(10)

set @pyvar = 'HI!'"""

curs.execute(sql) #This works

sql = "set @pyvar = 'retry'"

curs.execute(sql) #this will give an error.

curs.commit() #Won't get here.

This will give a “Must declare the scalar variable '@pyvar'.” error when it reaches the second execute statement, because that is in a new batch and the new batch does not have the variable declared.


The scope of a variable is generally straightforward, but it can have some subtleties.  Essentially, you can use a variable within the function, procedure, or batch it is declared in.

(Edited 17 Feb. 2013 to correct a typo in a sourcecode tag.)

A really simple Multiprocessing Python Example

Purpose and introduction

A Python program will not be able to take advantage of more than one core or more than one CPU by default.  One way to get the program to take advantage of multiple cores is through the multiprocessing module.  There are lots of excellent references and tutorials available on the web (links are included at the bottom), but one thing I was not able to find back when I first started using multiprocessing was a detailed look at an extremely simple, but still practical, example.  Sometimes, it is useful when dealing with a new technique to see it in a very simple form, but not so simple as some of the completely contrived examples in the library documentation.

So, here is a look at an extremely simple example using an embarrassingly parallel issue: generating the Mandelbrot set.  The algorithm used is basically a direct adaptation of the one presented in pseudo-code on Wikipedia, grouping the pixels into rows to make it easier to pass off to the multiprocessing.  Just to be clear, this is far from the fastest or best or most elegant way to use Python to calculate the Mandelbrot set.  It does provide a fairly good springboard for using multiprocessing while still doing actual work.

Essentially this provides a straightforward example with explanations of processing a function against a list of arguments using multiprocessing and then gathering those together into a list.

A look at a single processor version

Here is the code for a single processor version:

import matplotlib.pyplot as plt
from functools import partial

def mandelbrotCalcRow(yPos, h, w, max_iteration = 1000):
    y0 = yPos * (2/float(h)) - 1 #rescale to -1 to 1
    row = []
    for xPos in range(w):
        x0 = xPos * (3.5/float(w)) - 2.5 #rescale to -2.5 to 1
        iteration, z = 0, 0 + 0j
        c = complex(x0, y0)
        while abs(z) < 2 and iteration < max_iteration:
            z = z**2 + c
            iteration += 1
        row.append(iteration) #record the escape count for this pixel

    return row

def mandelbrotCalcSet(h, w, max_iteration = 1000):
    partialCalcRow = partial(mandelbrotCalcRow, h=h, w=w, max_iteration = max_iteration)
    mandelImg = map(partialCalcRow, xrange(h))
    return mandelImg

mandelImg = mandelbrotCalcSet(400, 400, 1000)

The modifications needed to use multiprocessing

Obviously, to use multiprocessing, we need to import it, so towards the top, we add:

import multiprocessing

The mandelbrotCalcRow function can remain unchanged.  The main changes are to the mandelbrotCalcSet function, which now looks like:

def mandelbrotCalcSet(h, w, max_iteration = 1000):
    #make a helper function that takes only 1 variable
    #This is necessary since only accepts a single iterable
    partialCalcRow = partial(mandelbrotCalcRow, h=h, w=w, max_iteration = max_iteration)

    pool = multiprocessing.Pool() #creates a pool of processes, controls the workers
    mandelImg =, xrange(h)) #make our results with a map call
    pool.close() #we are not adding any more processes
    pool.join() #wait until all the workers are done before going on

    return mandelImg

Here, Pool creates the pool of processes that controls the workers.  It gets the environment ready to run multiple tasks.  One of the easiest ways to use the pool is its map method.  That takes a function and an iterable of parameters.  The function is then called for each parameter in the iterable, and the results are put into a list, distributing the calls over the available worker processes.

One significant difference between and the built-in map, other than the fact that can take advantage of multiple processors, is that will only take a single iterable of arguments for processing.  That is why I created a partial function, which freezes the other arguments.
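The partial trick is worth seeing in isolation: functools.partial freezes the extra arguments so that a multi-argument function becomes a single-argument function, which is exactly the shape that map (built-in or pool) expects. A small sketch with hypothetical names:

```python
from functools import partial

def scale(x, factor, offset=0):
    return x * factor + offset

# Freeze factor and offset, just as the Mandelbrot code freezes h, w and max_iteration:
double_plus_one = partial(scale, factor=2, offset=1)

# The frozen function now takes a single argument, so a plain map works:
result = list(map(double_plus_one, range(4)))
# result is [1, 3, 5, 7]
```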

Pool.close() then informs the pool that no new tasks will be added to it.  Either pool.close or pool.terminate needs to be called before pool.join can be called.  Pool.join stops and waits for all of the results to be finished and collected before proceeding with the rest of the program.  This gives a simple way to collect the results into a single list for use later.

The other significant change is that the main portion, the entry point of the script, needs to be wrapped in an “if __name__ == '__main__':” conditional on Windows.  This is because the main module needs to be able to be safely imported by a new Python interpreter.  Not doing this can result in problems such as a RuntimeError, or completely locking up the system in some of the tests I tried.  This, and a couple of other caveats, are mentioned in the Programming Guidelines.

So, the entry point now looks like:

if __name__=='__main__':
    mandelImg = mandelbrotCalcSet(400, 400, 1000)

In this example, the multiprocessing version only has 8 additional lines of code (it's 15 lines longer, but 7 of those lines are additional whitespace or comment lines I added to make it easier to read).  But it runs in less than a third of the time.

Of course, it is worth remembering the saying that “premature optimization is the root of all evil.”  It is normally smart to get the code working first, and then consider bringing in multiprocessing options.

And the results: (image of the generated Mandelbrot set)

Some related links.

  1. Multiprocessing Docs
  2. The examples in the documentation.
  3. An example of creating the Mandelbrot set using Cython.  For actually generating the set rather than just making examples for multiprocessing, that version is much better.
  4. A good discussion of parallel programming with numpy and scipy.


{Edit 10 Jan 13 – Corrected a minor spelling error.}
{Edit 20 May 14 – Corrected typos.}