Three wheels on my wagon, but how many SQLServers on my network?

This post is in direct response to a question I was asked today, and I thought I'd post it up because:

1.      My cell phone doesn't get on with social networking sites

2.      If I write it down, I stand a fighting chance of remembering it

and

3.      Hey, it's Thursday. Who needs an excuse to blog anyway?


The question posed was in relation to the SQLCMD command-line switch -L, which (according to MSDN and others) will tell you the names of all SQLServers on your network. Really? Actually, not quite, which is why the question was asked in the first place. When run from a laptop, none of the actual real server names were returned, just local instances on colleagues' laptops. To clarify, that means MyLaptop\INSTANCE1 was visible, but ServerInDataCenter\INSTANCE1 was not.


Let me be the first to admit that I'm no expert in all matters SQLServer (well, *some* matters, but definitely not all), so what follows is mainly conjecture, but it all hinges around what you consider to be *your* network.


Most corporate entities these days will have workstation IP address ranges (typically 192.168.x.x with a subnet mask of 255.255.x.x), and will also have different server IP address ranges (typically 10.1.x.x with various masks etc).


Whilst both of these addresses relate directly to NICs (in most cases) that are physically connected by copper / fibre, basic principles of TCP/IP state that they are not on the 'same' network.
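To make that concrete, here's a quick sketch using Python's ipaddress module (the addresses and masks below are illustrative examples, not anyone's actual network):

```python
import ipaddress

# A typical workstation address and a typical server address,
# each with its own subnet mask (example values only).
workstation = ipaddress.ip_interface("192.168.10.25/255.255.0.0")
server = ipaddress.ip_interface("10.1.4.7/255.255.255.0")

# Both NICs may hang off the same copper/fibre, but TCP/IP
# says they live on two different networks.
print(workstation.network)               # 192.168.0.0/16
print(server.network)                    # 10.1.4.0/24
print(server.ip in workstation.network)  # False
```

That final `False` is the whole point: traffic between the two has to be routed through a gateway, not broadcast.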


SQLCMD -L will look around your 'local' network, 192.168.x.x, and will get all the SQLServers broadcasting on that network ONLY. To get to the physical servers on other network segments, you'd have to use DNS to resolve the hostname to an IP address, and then you can connect (after passing through several gateways etc.).


It is my postulation that SQLCMD won't use DNS to break out of the local network when used with the -L switch, and therefore cannot return details of any remote servers. So, if you run it on a laptop, you get one set of results, and if you run it on a server, you get another.
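If my conjecture holds, the workaround is simply to do the name resolution yourself and then connect directly. A minimal sketch in Python (the server name is a placeholder, and sqlcmd itself is of course the real tool for the actual connection):

```python
import socket

# -L relies on local broadcast/browse traffic, which routers won't forward.
# To reach a server on another segment, resolve its name via DNS first,
# then connect to it by name (hostname below is just a placeholder).
try:
    ip = socket.gethostbyname("ServerInDataCenter")
    print(f"Resolves to {ip} - so: sqlcmd -S ServerInDataCenter\\INSTANCE1")
except socket.gaierror:
    print("DNS can't see it either - check the name or your DNS suffixes")
```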


Which sort of answers the question. Sort of. At least, I hope it helps out a little.


Back soon…


End-User Computing – "Let's go Shopping!"

First, let me say that I've never been to a Wal-Mart. Ever. And, bluntly, I don't intend to go to one either – they're full of very scary things (if the Internet is to be believed).


Secondly, in case you were wondering, this post is decidedly non-technical. What you're about to read is an opinion, a postulation, an analogy if you will, so I crave your indulgence on this one. Here goes.


I am, like most people, familiar with the concept of 'everything for everybody'. What I'm not so keen on is how some organisations may wish to physicalise that concept.


In the case of the larger 'super-centers', there's a whole load of stuff behind those walls. Some of it wonderful, some of it weird, and some of it of questionable value. All you have to do is enter the temple through those oh-so-quiet and oh-so-inviting doors, and nirvana lies within. Simples.


Where am I going with this? I can see the consternation from here. (And, seriously, you consider THAT suitable attire for surfing the internet? You never know who's watching.)


Bear with me.


So, you go into the store. You wander around looking for stuff you need. You buy just what you need, perhaps some stuff that you didn't know you needed, and an air-freshener that was on offer, as you quite like the fragrance. But you know you're not going to get everything in the cart, and you can always come back another day for the stuff you missed or that was not yet in stock.


If you're a careful shopper, you've organised the cart contents in some sort of logical order that will help you at the checkout, and ultimately when taking your booty back to your chosen transport method (or is it just me that does this?)


Having 'shopped-out', you queue up at the checkout, and you occasionally sneak a glance at the other shoppers' carts to see what they've got that you didn't (again, is that just me?).


What you'll see there is that some people aren't as fastidious as you when it comes to loading up the cart. There's beer on top of the bread and eggs, a sack of potatoes on top of the pastries, and it seems that the ketchup is making a last bid for freedom from the sides of the squeezy bottle.


There's also always someone who's got something that you wouldn't normally think you could buy in a superstore. Like a tractor. Whatever – as long as it's in the cart, they can buy it, right?


Notwithstanding the state of the queue, there's a guy in front of you having some serious difficulty getting his stuff onto the conveyor belt. You want to do the good neighbor thing and help him out, but your cart has got stability issues, and if you let go of it for a second, it'll go over. Then you'll have the embarrassment of calling a GC (sorry, janitor) over to pick up your stuff while you go and get some more. So you stay put. Wasting time.


Finally, the guy in front gets all his stuff on the belt. Yay! Only, his credit card gets declined because he's way over his limit, and he's only got enough cash for about half the stuff he's bought. So he and the cashier start sifting through what he's got, call over another assistant to help out, then another to take the items he certainly doesn't want back to the shelves they came from. All of this adds pressure to the other lines by diverting staff to yours, so the other shoppers start to get a bit grumpy too.


After some time, and some indignant huffing by you (and by now the several dozen others in your particular queue), the guy completes the sale and trundles off without a care in the world, oblivious to the chaos and confusion that he's left behind.


And, dear reader, THAT is precisely why THIS DBA is very VERY nervous about "End-User Computing". Thanks for reading.


Back soon…



SQL by the seaside, ‘Hug-A-DBA’, and a little philosophy

So, that’s another SQLBits event over. I say ‘another’, but really it’s my first one – there have been seven others before this that I have not attended (to my eternal shame).

There were some great sessions delivered by some great people, sharing experiences and chatting with friends old and new.

Look at me – I’ve gone all mushy over this. I’m genuinely sad to be leaving what has been touted as Europe’s Premier SQL Server Community Event, but at the same time I’m happy that the community is as strong as it is, and I’ll move heaven and earth to get to another one.

From all the excellent sessions I’ve attended, from the training day to the community day, two things stand out as ‘take-aways’ for me.

Take-away #1:  Really think about what it is that you’re trying to do, and learn from your previous missteps. The product stack that is SQL Server really CAN do anything, and do it well. As long as you understand the fundamentals, you can’t go far wrong by applying a little common sense and a whole load of real-world experience to it.

Take-away #2:  Talk. To anyone who wants to listen. Or even (like this blog) to yourself. You’ll be amazed what people have to offer, even if it’s just an amusing story about making Bloody Marys with ketchup (True story – ask me about this if we ever meet up). I guess that’s something we could all do a bit more in our non-SQL lives as well.

Someone I know suggested that DBAs should be classed as a social minority, but really we’re all just people who happen to be interested in data, and, whilst we’re definitely in a very small minority, that doesn’t mean we’re not humans first and foremost.

That’s why community events like SQLBits are so important – they satisfy the inner geek, and remind us that we all just need a little validation every once in a while. A shared experience of the world, if you will.

There’s a reason this post contains the word ‘philosophy’ as well. Sure, there’s the bit about being a better person and being nice to everyone (including developers), but the word comes from the Greek for ‘lover of knowledge’, and I certainly love the knowledge (geek and non-geek) I got here.

And that’s certainly something worth talking about.

Back soon….

CTEs, MERGE, and a couple of broken rules

I've thought about this next post for quite some time. Not the content, but whether I should post it up on the internet or not.

You see, I've got a few rules about blogging, and if you're reading this, then I've just broken a couple of them.

Rule 1 was "Always remove any details that could be used to identify a specific system, person or organisation". I've broken this a little, because I've not altered the scripts herein one iota, but (if you've read other articles in this blog) you already know I dabble in Sharepoint a little. And, in my defence, the scripts are specific to a setup for a solution that may not eventually exist, but I think they do illustrate the point quite well (feel free to contradict me on that).

Rule 2 was "never post the same content internally and externally". This one is well and truly shattered, since I'm now reproducing the content of a post I did internally in its entirety. But, again, I couldn't think of another way of saying what I wanted to, so I was kind of caught between a rock and… another rock.

I've got other rules as well (which I'm not going to share right now), but hopefully you'll see what I was getting at here, so I crave your forgiveness for giving myself a hard time over this.

And here's that post in full ->

Some of you may have seen a fairly unsuccessful but mercifully short demo I did recently about using a CTE to find mismatches between new (source) and old (target) data sets, and then update the target when the data has changed.

The premise was basically to avoid complex JOINs, NOT IN, and various other unsavoury constructs that can cause performance issues, especially when the NOT IN contains a sub-select.

The script looked pretty much like this:

WITH Mismatches (dbName, urlPath)
AS
(
    SELECT a.dbName, a.urlPath
    FROM @t_extract a
    INNER JOIN SharepointMaintenance.dbo.SharepointSites b
        ON a.urlPath = b.FullURL
    INNER JOIN SharepointMaintenance.dbo.ContentDBs c
        ON a.dbName = c.Name
    WHERE a.dbName != c.Name
        AND a.urlPath != '/'
)
UPDATE SharepointMaintenance.dbo.SiteTracking
SET [dbID] = DB_ID(a.dbName),
    extractDate = a.extractDate,
    lastDBId = c.ContentDBId,
    lastUpdate = GETDATE()
FROM @t_extract a
INNER JOIN SharepointMaintenance.dbo.SharepointSites b
    ON a.urlPath = b.FullURL
INNER JOIN SharepointMaintenance.dbo.ContentDBs c
    ON a.dbName = c.Name
INNER JOIN Mismatches d
    ON b.FullURL = d.urlPath;

So (once you've set up properly), this would update existing rows with changed data, and you'd still have to cater for completely *new* data somewhere else. Bummer.

Having taken a step back from this for a few weeks, and then having another go (there's always a more elegant solution, don't you think?), I thought about using the new SQL2008 MERGE statement.

How hard could it be? Put very simply, MERGE compares your target and source, works out which rows match, and then does different things for matched and unmatched rows depending on what you tell it to do.

The re-write would look something like the next snippet. The more observant among you will notice that I've fundamentally changed the data that I'm interested in, but that's because my initial analysis was tragically flawed. And it's my demo.

MERGE SharepointMaintenance.dbo.SiteTracking AS TARGET
USING (SELECT [dbID], SiteID FROM @t_extract2) AS SOURCE ([dbID], SiteID)
    ON (TARGET.[SiteID] = SOURCE.[SiteID])
WHEN MATCHED THEN -- Update tracking table with last and current dbID
    UPDATE SET [dbID] = SOURCE.[dbID],
        extractDate = @datenow2,
        lastDBId = TARGET.[dbID],
        lastUpdate = TARGET.extractDate
WHEN NOT MATCHED THEN -- Insert new row to tracking table, so we've got the new site ID
    INSERT ([dbID], SiteID, extractDate)
    VALUES (SOURCE.[dbID], SOURCE.SiteID, @datenow2);

This second construct does both the UPDATE (for changed data) and the INSERT (for new data), so there's no further coding required.
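For anyone without a SQL2008 instance to hand, the matched/not-matched idea can be sketched with SQLite's upsert syntax (its closest analogue to MERGE) via Python's sqlite3 module. The table and columns here are simplified stand-ins for the tracking table above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE SiteTracking (SiteID INTEGER PRIMARY KEY, dbID INTEGER, lastDBId INTEGER)"
)
conn.execute("INSERT INTO SiteTracking VALUES (1, 100, NULL)")  # existing row

# Source rows: SiteID 1 has moved to a new database; SiteID 2 is brand new.
source = [(1, 200), (2, 300)]

# MERGE-like behaviour: WHEN MATCHED -> update (remembering the old dbID),
# WHEN NOT MATCHED -> insert.
conn.executemany(
    """INSERT INTO SiteTracking (SiteID, dbID) VALUES (?, ?)
       ON CONFLICT(SiteID) DO UPDATE SET
           lastDBId = SiteTracking.dbID,
           dbID = excluded.dbID""",
    source,
)

rows = conn.execute(
    "SELECT SiteID, dbID, lastDBId FROM SiteTracking ORDER BY SiteID"
).fetchall()
print(rows)  # [(1, 200, 100), (2, 300, None)]
```

Just like the MERGE, the matched row keeps a memory of its previous dbID, and the unmatched row is inserted fresh, in a single statement.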

Apart from all the other stuff you\’ll need to put around it, of course.

Back soon…. (and thanks for reading)