Rubrik Buzzword Bingo Podcast

We recently recorded an episode for Rubrik’s Buzzword Bingo podcast series in which we discussed how the public cloud is changing database protection architectures worldwide. Check it out!


CosmosDB Presentation Aug 2018 - Chicago SQL Server Users Group

Bob Pusateri is presenting this Thursday, August 9th, for the Chicago SQL Server Users Group with a new presentation called “Select Stars: A SQL Server DBA’s Introduction to Azure Cosmos DB”. The meeting starts at 5:30pm at the Microsoft Technology Center in the Aon Center, 200 East Randolph Drive, downtown Chicago, IL.

Azure Cosmos DB has quickly become a buzzword in database circles over the past year, but what exactly is it, and why does it matter? This session will cover the basics of Azure Cosmos DB, how it works, and what it can do for your organization. You will learn how it differs from SQL Server and Azure SQL Database, what its strengths are, and how to leverage them. We will also discuss Azure Cosmos DB’s partitioning, distribution, and consistency methods to gain an understanding of how they contribute to its unprecedented scalability. Finally we will demonstrate how to provision, connect to, and query Azure Cosmos DB. If you’re wondering what Azure Cosmos DB is and why you should care, attend this session and learn why Azure Cosmos DB is an out-of-this-world tool you’ll want in your data toolbox!

Due to building security, you must RSVP by 5PM Tuesday (today!) to attend the meeting. RSVP here to see this fantastic presentation!


Podcast - Database Protection in the Cloud Era

We’re proud to announce that we recently took part in a podcast recording with Rubrik, in which we discuss database protection in the public cloud era.

Today, companies are on a journey to modernize their infrastructures and meet the changing demands of businesses. The cloud is becoming a strategic part of this initiative due to its potential to unlock cost savings and increase agility. However, the cloud also brings a new set of migration and data protection challenges.

In this episode, Michael Fal (Rubrik), David Klee (Heraflux Technologies), and John Morehouse (Denny Cherry & Associates) explore the database landscape in the cloud era and share protection best practices.

The podcast is hosted over at SoundCloud. Check it out!


SQL Server Sessions at VMworld 2018 USA

We hope to see you all at this year’s VMworld 2018 USA conference where we are fortunate enough to have been selected to present four SQL Server-themed sessions!

The first is an all-day boot camp (VAP3768WU) with Oleg Ulyanov from VMware Corp. and David Klee from Heraflux, where we take participants through a comprehensive journey covering the management, performance tuning, and business continuity planning of high-performance SQL Servers on the VMware virtualization platform. It runs the Sunday before VMworld officially ramps up, so if you have infrastructure engineers attending this conference, please tell them to register for this boot camp.

Next, Thomas LaRock from SolarWinds (@sqlrockstar) and David will be presenting a session called “Performance Deep Dive for Demanding Virtual Database Servers” (VAP1425BU). We’ll be discussing advanced performance-related topics such as CPU scheduling and vNUMA, “right-sizing” for both performance and licensing, virtual disks and performance discrepancies, and general operational efficiency.

Next, Michael Corey from LicenseFortress (@michael_corey) and David will be presenting a session called “Database Virtualization (Monster VMs) with VMware vSphere: Doing IT Right” (VAP1296BU), the latest installment in our ongoing track of performance-related tips and tricks. Come see us laugh our way through over a hundred slides packed full of things you can take back to your environments!

Finally, David has a solo session called “Database vDisks and You” (VMTN5520B), where he’ll talk about the various layers of queueing between the SQL Server database and the enterprise SAN, how to measure and monitor each layer, and ways to tune each layer’s performance.

As always, it’s going to be a great show! Contact us if you want to meet up in person while at the show, and let’s geek out and talk shop! Register for this conference here, and make sure to enroll in the SQL Server on VMware boot camp!


Redgate SQL in the City Chicago 2018

Our very own Bob Pusateri is presenting as part of the Redgate SQL in the City conference at the Microsoft Technology Center in Chicago, IL on October 26th!

SQL in the City Summit Chicago takes place at the Microsoft Technology Center, near Millennium Park, and will focus on the concept of Compliant Database DevOps.

With world-class speakers, thought-provoking sessions, engaging panel discussions, interactive workshops, keynotes, and time for networking, the Summit will help senior data professionals deliver value more quickly while keeping their data safe.

Key discussions:

  • Reduce manual tasks to free up teams to spend more time on development
  • Break down silos to enable Dev and Ops to work together to deliver value
  • Reduce the risk of downtime or compliance breaches going undetected
  • Balance the need to release faster with the need to ensure data security through the right processes and solutions

Who should attend?

The sessions are for C-level IT executives, directors, managers, and senior database engineers and administrators who work with the Microsoft Data Platform.

Professionals interested in all things DevOps will learn what’s needed to effectively balance the demand to deliver software faster while protecting and preserving business-critical data.

From healthcare to finance, government to technology, today’s leaders should be embracing the benefits of DevOps and supporting the implementation of change across IT teams.

There is a limited number of tickets available for the Summit, to support effective networking and ensure a friendly and inspiring atmosphere for all.

You’ll need to use an Access Code to register; please reach out to your regional Redgate Account Rep to obtain one.

Get your access code and register here!


SQL Server on VMware Boot Camp at VMworld 2018

Heraflux is extremely proud to announce that we are presenting, in conjunction with VMware Corp., an all-day boot camp covering advanced SQL Server operations, performance tuning, and business continuity strategies at this year’s VMworld 2018 USA in Las Vegas on Sunday, August 26th, at the Mandalay Bay conference center.

David Klee from Heraflux will be joining Oleg Ulyanov from VMware Corp. to present this all-day intensive workshop.

Workshop Abstract: Learn the essential skills necessary to run SQL Server implementations on VMware vSphere. VMware expert Oleg Ulyanov will be joined by Microsoft MVP David Klee from Heraflux to cover best practices and optimal approaches to deployment, operation, and management of SQL Server on VMware vSphere. This technical workshop delivers real-life, instructor-led, live training and incorporates the recommended design and configuration practices for architecting business-critical databases on VMware vSphere infrastructure. Subjects covered in depth will include performance, monitoring, high availability and disaster recovery with failover cluster instances or availability groups, and advanced storage architectures.

Register for this exciting workshop as part of your VMworld conference admission ASAP, as seats are filling up! SQL Server is not a trivial business-critical application to manage in a virtual environment, so come learn many real-world tips and tricks for making these databases run as fast and smoothly as possible.


Speaking at SQL Saturday Oslo 2018

I always feel honored when chosen to present at a SQL Saturday event. Being selected is never a guarantee, especially these days when organizers seemingly have more speakers and abstracts to pick from than ever before. But I am just over-the-moon happy to share that I was picked to speak at SQL Saturday in Oslo, Norway coming up on September 1. Norway has been on my list of places to visit for years, and I really can’t wait. Thank you so much to the SQL Saturday Oslo organizing team for putting together an amazing schedule and for allowing me to be a part of it – this is going to be a fantastic event!

My presentation at this event is entitled “Select Stars: A SQL DBA’s Introduction to Azure Cosmos DB”. I’ve been working with Azure Cosmos DB for a while now, and it’s really an incredible product. It’s been generating lots of buzz as of late, but there are still plenty of DBAs out there who have yet to use it. This is an introductory-level session that focuses on explaining what Azure Cosmos DB is, how it works, what its strengths are, and how they can be leveraged. To anyone curious about Azure Cosmos DB, this is the session for you!

Registration for SQL Saturday Oslo is now open – register today before it fills up! If you would like to extend your SQL Saturday experience even further, they are also offering four pre-conference sessions on Friday August 31.

I am incredibly stoked to visit Norway and I hope I’ll see you there in just three short months!

(Syndicated from Bob’s home blog site at bobpusateri.com)


SQL Saturday Iowa City Precon June 22

We are extremely proud to announce that David Klee has been selected to present an all-new preconference training session on Friday, June 22, ahead of the SQL Saturday in Iowa City, Iowa. This all-day training class, entitled “Level Up Your Cloud Infrastructure Skills”, is designed to help you, the SQL Server professional, ramp up your knowledge of how to better manage workloads in the cloud.

Think infrastructure in the cloud is still just for sysadmins? Think again! As your organization moves into the cloud, infrastructure skills are more important than ever for DBAs to master. Expert knowledge of cloud-related infrastructure will help you maintain performance and availability for databases in the cloud. For example, know what an IOP is? How many does your database consume during a given day? Properly sizing a cloud database depends on your knowledge of this metric. Failure to properly configure storage performance at the time of deployment will slow down your SQL Server considerably. Come learn many of the key cloud infrastructure points that you should master as the DBA role continues to evolve in this all-day boot camp on ramping up your knowledge on cloud management!
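
To make the IOPs point concrete, here is a minimal sketch (an illustration for this post, not part of the class materials) that estimates the average I/O operations per second an instance has driven since its last restart, using the Invoke-Sqlcmd2 function that ships with dbatools; the instance name is a placeholder:

# Rough average IOPs since the last SQL Server restart (peak demand will be higher),
# pulled from sys.dm_io_virtual_file_stats. "InstanceName" is a placeholder.
$query = @"
SELECT SUM(vfs.num_of_reads + vfs.num_of_writes) * 1.0
       / DATEDIFF(SECOND, si.sqlserver_start_time, GETDATE()) AS avg_iops_since_startup
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
CROSS JOIN sys.dm_os_sys_info AS si
GROUP BY si.sqlserver_start_time;
"@

Invoke-Sqlcmd2 -ServerInstance InstanceName -Database master -Query $query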

RSVP for this exciting preconference training session here for just $125! Seats are limited, so register soon!


Importing Data With PowerShell and dbatools

I like to use public datasets for experimentation and presentation demos, especially data that people can easily understand and relate to. Keeping some of them up to date used to be a manual process of downloading files, loading tables, and merging. There are of course many better ways to do this, some more automated than others. I could have simply used PowerShell to call bcp, or even implemented an insert statement and some loops. Then I found dbatools, which has commands that let me do an even better job with far less work, just the way I like it! Here’s how I now keep my datasets current:

Getting The Data

I’ll be using data from the City of Chicago’s Data Portal. They have a tremendous online resource with lots of public datasets available. One that I really like is their listing of towed vehicles. Any time the city tows or impounds a vehicle, a record gets added here and remains for 90 days. It’s very manageable, with only 10 columns and a few thousand rows. (As an added bonus, you can search for license plates you know and then ask your friends about their experience at the impound lot!)

Chicago’s data portal uses Socrata, which is a very well-documented and easy-to-use tool for exposing data. It has a wonderful API for querying and accessing data, but to keep things simple for this post we’re just going to download a CSV file.

If you’re on the page for a dataset, you can download it by clicking on “Export” on the top right and then selecting “CSV”. To avoid all that, the direct link to download a CSV of this dataset is here. Download it and take a look at what we’ve got using your spreadsheet or text editor of choice (mine is Notepad++).
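
If you’d rather script the download as well, something like the following works. The URL and file path here are placeholders (grab the real CSV link from the Export button), and this is a simplified sketch rather than my exact script:

# Placeholders: substitute the direct CSV link from the Export button and your own path
$datasetUrl   = "<direct CSV link to the towed vehicles dataset>"
$downloadFile = "C:\Temp\TowedVehicles.csv"

# Invoke-WebRequest is built into PowerShell and saves the response straight to disk
Invoke-WebRequest -Uri $datasetUrl -OutFile $downloadFile

The $downloadFile variable is also what the import command later in this post points at.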

Loading The Data

We’ve got our data, now let’s load it. I like to load the entire downloaded dataset into a stage table, and then copy new rows I haven’t previously seen into my production table that I query from. Here’s the script to create these tables:

-- CREATE STAGE TABLE
CREATE TABLE [dbo].[TowedVehiclesSTG](
    [TowDate] [date] NOT NULL,
    [Make] [nchar](4) NULL,
    [Style] [nchar](2) NULL,
    [Model] [nchar](4) NULL,
    [Color] [nchar](3) NULL,
    [Plate] [nchar](8) NULL,
    [State] [nchar](2) NULL,
    [TowedToFacility] [nvarchar](75) NULL,
    [FacilityPhone] [nchar](14) NULL,
    [ID] [int] NOT NULL
);

-- CREATE FINAL TABLE
CREATE TABLE [dbo].[TowedVehicles](
    [ID] [int] NOT NULL,
    [TowDate] [date] NOT NULL,
    [Make] [nchar](4) NULL,
    [Style] [nchar](2) NULL,
    [Model] [nchar](4) NULL,
    [Color] [nchar](3) NULL,
    [Plate] [nchar](8) NULL,
    [State] [nchar](2) NULL,
    [TowedToFacility] [nvarchar](75) NULL,
    [FacilityPhone] [nchar](14) NULL,
    CONSTRAINT PK_TowedVehicles PRIMARY KEY CLUSTERED (ID)
);

Now for the magic: let’s load some data! The dbatools command that does all the heavy lifting here is called Import-DbaCsvToSql. It loads CSV files into a SQL Server table quickly and easily. As an added bonus, the entire import runs within a transaction, so if an error occurs everything gets rolled back. I like to specify my tables and datatypes ahead of time, but if you want to load into a table that doesn’t exist yet, the command will create one and do its best to guess the appropriate datatypes. To use it, simply point it at a CSV file and a SQL Server instance, database, and (optionally) a table. It will take care of the rest.

# Load from CSV into staging table
Import-DbaCsvToSql -Csv $downloadFile -SqlInstance InstanceName -Database TowedVehicles -Table TowedVehiclesSTG `
    -Truncate -FirstRowColumns

The two parameters on the second line tell the command to truncate the table before loading, and that the first line of the CSV file contains column names.

Now the data has been staged, but since this dataset contains all cars towed over the past 90 days, chances are very good that I already have some of these tows in my production table from a previous download. A simple query to insert all rows from staging into production that aren’t already there will do the trick. This query is run using another dbatools command, Invoke-Sqlcmd2.

# Move new rows from staging into production table
Invoke-Sqlcmd2 -ServerInstance InstanceName -Database TowedVehicles `
    -Query "INSERT INTO [dbo].[TowedVehicles]
SELECT
    [ID],
    [TowDate],
    [Make],
    [Style],
    [Model],
    [Color],
    [Plate],
    [State],
    [TowedToFacility],
    [FacilityPhone]
FROM (
    SELECT
        s.*,
        ROW_NUMBER() OVER (PARTITION BY s.ID ORDER BY s.ID) AS n
    FROM [dbo].[TowedVehiclesSTG] s
    LEFT JOIN [dbo].[TowedVehicles] v ON s.ID = v.ID
    WHERE v.ID IS NULL
) a
WHERE a.n = 1"

The ID column uniquely identifies each tow event, and the production table uses it as a primary key. However, I have found that the dataset occasionally contains duplicated rows. The ROW_NUMBER() window function addresses this by ensuring that only one row per ID gets inserted.

Putting it all together

I’ve shown you how simple dbatools makes it to load a CSV file into a table and then run a query to move new rows from staging into production, but the beauty of PowerShell is that it’s easy to do far more than that. I actually scripted this entire process, including downloading the data! You can download the full PowerShell script, along with a T-SQL script for creating the tables, from my GitHub here.
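
For a quick picture of how the pieces fit together before you open the full script, here is a condensed sketch of the three steps; the dataset URL, file path, and instance name are placeholders rather than the values from my actual script:

# 1. Download the latest CSV (placeholder URL and path; substitute your own)
$datasetUrl   = "<direct CSV link from the data portal>"
$downloadFile = "C:\Temp\TowedVehicles.csv"
Invoke-WebRequest -Uri $datasetUrl -OutFile $downloadFile

# 2. Truncate and reload the staging table from the CSV
Import-DbaCsvToSql -Csv $downloadFile -SqlInstance InstanceName -Database TowedVehicles `
    -Table TowedVehiclesSTG -Truncate -FirstRowColumns

# 3. Insert only the rows we haven't seen before into the production table
Invoke-Sqlcmd2 -ServerInstance InstanceName -Database TowedVehicles `
    -Query "INSERT INTO [dbo].[TowedVehicles]
            SELECT [ID], [TowDate], [Make], [Style], [Model], [Color], [Plate],
                   [State], [TowedToFacility], [FacilityPhone]
            FROM (
                SELECT s.*, ROW_NUMBER() OVER (PARTITION BY s.ID ORDER BY s.ID) AS n
                FROM [dbo].[TowedVehiclesSTG] s
                LEFT JOIN [dbo].[TowedVehicles] v ON s.ID = v.ID
                WHERE v.ID IS NULL
            ) a
            WHERE a.n = 1"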

Happy Data Loading!

This post was cross-posted from Bob’s personal technical blog at bobpusateri.com.


Bob Pusateri presenting at Chicago Suburban SQL Server Users Group

Bob Pusateri from our team is proud to be presenting a session called “Locks, Blocks, and Snapshots: Maximizing Database Concurrency” at the Suburban Chicago SQL Server Users Group on April 17th at 6pm.

Abstract: The ability for multiple processes to query and update a database concurrently has long been a hallmark of database technology, but this feature can be implemented in many ways. This session will explore the different isolation levels supported by SQL Server and Azure SQL Database, why they exist, how they work, how they differ, and how In-Memory OLTP fits in. Demonstrations will also show how different isolation levels can determine not only the performance, but also the result set returned by a query. Additionally, attendees will learn how to choose the optimal isolation level for a given workload, and see how easy it can be to improve performance by adjusting isolation settings. An understanding of SQL Server’s isolation levels can help relieve bottlenecks that no amount of query tuning or indexing can address. Attend this session and gain Senior DBA-level skills on how to maximize your database’s ability to process transactions concurrently.

RSVP for this event today!