Tuesday 29 November 2016

Find letters, numbers and special characters in a column in SQL Server

Suppose we have a table in which a column contains mixed data (letters, numbers and special characters), and we need to pull each category out separately. The SQL script below populates a table variable with sample data and then uses master..spt_values (type 'P', a built-in numbers table covering 0 to 2047) to extract each character class:
DECLARE @table table (id int identity(1,1), value nvarchar(100))
INSERT INTO @table
VALUES ('B65bfNM#*amF5*t'), ('4GZ_3h^2STJTZVG')
, ('4%bKU&hxgj7!3!t'), ('uX=g3PE+w&uVV!h')


SELECT id, value,
    (SELECT SUBSTRING(value, number, 1)
     FROM master..spt_values
     WHERE SUBSTRING(value, number, 1) LIKE '[A-Z]' AND type = 'P'
     FOR XML PATH('')) AS [Letters],
    (SELECT SUBSTRING(value, number, 1)
     FROM master..spt_values
     WHERE SUBSTRING(value, number, 1) LIKE '[0-9]' AND type = 'P'
     FOR XML PATH('')) AS [Numbers],
    (SELECT SUBSTRING(value, number, 1)
     FROM master..spt_values
     WHERE SUBSTRING(value, number, 1) LIKE '[^0-9]'
       AND SUBSTRING(value, number, 1) LIKE '[^A-Z]' AND type = 'P'
     FOR XML PATH('')) AS [special characters]
FROM @table

See the output
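One caveat: because the concatenation relies on FOR XML PATH, characters with special meaning in XML (&, <, >) come back entitized, so the & in the third sample row would appear as &amp;. A minimal sketch of the usual workaround, wrapping the subquery with TYPE and .value() to decode the entities (shown for the special-characters column only; the other two columns can be unwrapped the same way):

SELECT id, value,
    (SELECT SUBSTRING(value, number, 1)
     FROM master..spt_values
     WHERE SUBSTRING(value, number, 1) LIKE '[^0-9]'
       AND SUBSTRING(value, number, 1) LIKE '[^A-Z]' AND type = 'P'
     FOR XML PATH(''), TYPE).value('.', 'nvarchar(100)') AS [special characters]
FROM @table
-- For 'B65bfNM#*amF5*t' this yields #** ; without TYPE/.value(),
-- a character such as & would surface as &amp; instead.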
  


Check whether a number is prime using a SQL script

With the help of the SQL script below, we can check whether a number is prime or not.
DECLARE @Number AS INT = 7

-- Note: master..spt_values only supplies numbers up to 2047, so this
-- check is reliable for @Number values up to 2048, and values below 2
-- are (incorrectly) reported as prime.
SELECT
    CASE WHEN COUNT(DISTINCT num1.number) > 0
             THEN CAST(@Number AS varchar(10)) + ' is not a prime number'
             ELSE CAST(@Number AS varchar(10)) + ' is a prime number' END AS [Check Prime]
FROM master..spt_values AS num1
WHERE num1.number > 1
      AND num1.number < @Number
      AND @Number % num1.number = 0

See the output
  
See with another example
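Running the same check with a composite number flips the result; a sketch of the same script with @Number = 12, with the expected message noted in a comment:

DECLARE @Number AS INT = 12

SELECT
    CASE WHEN COUNT(DISTINCT num1.number) > 0
             THEN CAST(@Number AS varchar(10)) + ' is not a prime number'
             ELSE CAST(@Number AS varchar(10)) + ' is a prime number' END AS [Check Prime]
FROM master..spt_values AS num1
WHERE num1.number > 1
      AND num1.number < @Number
      AND @Number % num1.number = 0
-- Returns: 12 is not a prime number (divisors 2, 3, 4 and 6 are found)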





Saturday 26 November 2016

Loading XML file data into a database in SQL Server

Suppose we have an XML file and want to load its data into the database. We can load documents into SQL Server using OPENROWSET, which reads the whole XML file into a single row with a single column. That value can then be cast to the XML data type and shredded, either with the nodes() method (used in the example below) or with the older OPENXML function; both let an XML document be treated like a table.

Let’s see the example

I am creating a table

create table product
(
ProdictID varchar(10),
Name varchar(50),
ProductNumber varchar(50),
SafetyStockLevel varchar(50),
RecorderPoint varchar(50)
)
I have an XML file and want to load it into the database.
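The original post showed the file only as a screenshot. A minimal hypothetical sample, assuming the /product/row structure and the element names the script below parses, might look like this:

<product>
  <row>
    <ProdictID>1</ProdictID>
    <Name>Adjustable Race</Name>
    <ProductNumber>AR-5381</ProductNumber>
    <SafetyStockLevel>1000</SafetyStockLevel>
    <RecorderPoint>750</RecorderPoint>
  </row>
  <row>
    <ProdictID>2</ProdictID>
    <Name>Bearing Ball</Name>
    <ProductNumber>BA-8327</ProductNumber>
    <SafetyStockLevel>1000</SafetyStockLevel>
    <RecorderPoint>750</RecorderPoint>
  </row>
</product>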

-- Loading the xml file
Declare @XmlFileData  Varchar(MAX)
Select @XmlFileData = BulkColumn
from OPENROWSET(Bulk 'G:\textxml.xml',Single_blob) xData;

-- Parse the XML into object
Declare @XmlData XML
SET @XmlData = CONVERT(XML, @XmlFileData, 1);

-- select the parsed XML data into our table
insert into test.dbo.product
SELECT
XmlData.value('ProdictID[1]','VARCHAR(1000)') as ProdictID,
XmlData.value('Name[1]','VARCHAR(1000)') as Name,
XmlData.value('ProductNumber[1]','varchar(1000)') as ProductNumber,
XmlData.value('SafetyStockLevel[1]','VARCHAR(1000)') as SafetyStockLevel,
XmlData.value('RecorderPoint[1]','VARCHAR(1000)') as RecorderPoint
FROM @XmlData.nodes('/product/row') as x(XmlData)
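As mentioned above, the older OPENXML function can shred the same document. A sketch of that approach for completeness, using the same hypothetical file and the same columns:

-- Parse with OPENXML instead of nodes()
DECLARE @hDoc INT
EXEC sp_xml_preparedocument @hDoc OUTPUT, @XmlData

INSERT INTO test.dbo.product
SELECT ProdictID, Name, ProductNumber, SafetyStockLevel, RecorderPoint
FROM OPENXML(@hDoc, '/product/row', 2)  -- 2 = element-centric mapping
WITH (ProdictID varchar(10), Name varchar(50), ProductNumber varchar(50),
      SafetyStockLevel varchar(50), RecorderPoint varchar(50))

EXEC sp_xml_removedocument @hDoc  -- free the document handle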


When used with the BULK provider keyword, you can name a data file to read as one of three types of objects:
- SINGLE_BLOB, which reads a file as varbinary(max)
- SINGLE_CLOB, which reads a file as varchar(max)
- SINGLE_NCLOB, which reads a file as nvarchar(max)
In every case, OPENROWSET returns a single row with a single column.
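Since SINGLE_CLOB already comes back as varchar(max), the intermediate varchar variable in the script above could arguably be skipped. A minimal sketch (the variable name is mine):

DECLARE @XmlData2 XML
SELECT @XmlData2 = CONVERT(XML, BulkColumn, 1)
FROM OPENROWSET(BULK 'G:\textxml.xml', SINGLE_CLOB) AS xData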
Execute the SQL script and the data will be inserted into the table.
 
See it in the table




Thursday 24 November 2016

Loading CSV file data into a database using BULK INSERT in SQL Server

I get a daily CSV file from a client, and my job is to load that data into a SQL Server table. CSV stands for Comma Separated Values, sometimes also called Comma Delimited Values.

Syntax of Bulk Insert

BULK INSERT
[ database_name ]. [ schema_name ]. [ table_name ]
FROM 'CSV file full path'
[ WITH
(
[ [ , ] BATCHSIZE = batch_size ]
[ [ , ] CHECK_CONSTRAINTS ]
[ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
[ [ , ] DATAFILETYPE =
{ 'char' | 'native' | 'widechar' | 'widenative' } ]
[ [ , ] FIELDTERMINATOR = 'field_terminator' ]
[ [ , ] FIRSTROW = first_row ]
[ [ , ] FIRE_TRIGGERS ]
[ [ , ] FORMATFILE = 'format_file_path' ]
[ [ , ] KEEPIDENTITY ]
[ [ , ] KEEPNULLS ]
[ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
[ [ , ] LASTROW = last_row ]
[ [ , ] MAXERRORS = max_errors ]
[ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
[ [ , ] ROWS_PER_BATCH = rows_per_batch ]
[ [ , ] ROWTERMINATOR = 'row_terminator' ]
[ [ , ] TABLOCK ]
[ [ , ] ERRORFILE = 'file_name' ]
) ]

BATCHSIZE

We can define the number of rows in a batch. Each batch is copied to the server as one transaction. If this fails, SQL Server commits or rolls back the transaction for every batch. By default, all data in the data file is one batch.

CHECK_CONSTRAINTS

Specifies that all constraints on the target table must be checked during the bulk-import operation. Without this option, any CHECK constraints are ignored.

CODEPAGE

It specifies the code page of the data in the data file.

FIELDTERMINATOR

Here we specify the field terminator to be used for char and widechar data files. The default field terminator is \t (the tab character). If the file uses a different delimiter, such as a comma, colon or pipe (, : |), define it here.

FIRSTROW

Specifies the number of the first row to load. Use it to skip rows, for example a header row.

FIRE_TRIGGERS

Specifies that any INSERT triggers defined on the destination table execute during the bulk-import operation. With FIRE_TRIGGERS, AFTER INSERT triggers fire once for every completed batch; without it, no insert triggers execute.

KEEPIDENTITY

Specifies that the identity values in the data file are used for the identity column. By default (without KEEPIDENTITY), identity values in the imported file are ignored and SQL Server assigns unique values automatically, based on the seed and increment values specified during table creation.

KEEPNULLS

It specifies that empty columns should retain a null value during the bulk-import operation, instead of having any default values for the columns inserted.

KILOBYTES_PER_BATCH

It specifies the approximate number of kilobytes (KB) of data per batch as kilobytes_per_batch.

MAXERRORS

It specifies the maximum number of syntax errors allowed in the data before the bulk-import operation is canceled. If it is not specified, the default is 10.

ROWTERMINATOR

Here we specify the row terminator to be used for char and wide char data files. The default row terminator is \r\n (newline character).

TABLOCK

It specifies that a table-level lock is acquired for the duration of the bulk-import operation.
ERRORFILE
We can specify the file used to collect rows that have formatting errors and cannot be converted to an OLE DB rowset. These rows are copied into this error file from the data file "as is." The error file is created when the command is executed; an error occurs if the file already exists. Additionally, a control file with the extension .ERROR.txt is created, which references each row in the error file and provides error diagnostics. As soon as the errors have been corrected, the data can be loaded.
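Several of these options can be combined in one statement. A hedged sketch, with hypothetical paths and a hypothetical staging table name, assuming a comma-delimited file with a header row:

BULK INSERT dbo.product_staging           -- hypothetical table
FROM 'G:\Productcsv.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
FIRSTROW = 2,                             -- skip the header row
BATCHSIZE = 10000,                        -- commit in 10,000-row transactions
MAXERRORS = 10,                           -- abort after 10 bad rows
ERRORFILE = 'G:\Productcsv.err',          -- must not already exist
TABLOCK                                   -- table-level lock for a faster load
)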

Let’s see the example

I am creating a table


create table product
(
ProdictID varchar(10),
Name varchar(50),
ProductNumber varchar(50),
SafetyStockLevel varchar(50),
RecorderPoint varchar(50)
)
I have a CSV file and want to load it into the database; a hypothetical sample of its contents is sketched below.
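The original post showed the file only as a screenshot. Assuming a header row (hence FIRSTROW = 2 below) and the five columns of the product table, Productcsv.csv might look like:

ProdictID,Name,ProductNumber,SafetyStockLevel,RecorderPoint
1,Adjustable Race,AR-5381,1000,750
2,Bearing Ball,BA-8327,1000,750
3,BB Ball Bearing,BE-2349,800,600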
See the syntax of BULK INSERT above.

BULK INSERT product
FROM 'G:\Productcsv.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
FIRSTROW =2
)

 
The data loads successfully into the database. Now we see the result.
 


Wednesday 23 November 2016

Calculating the running average in SQL Server

See the running average:

Number   Running Total   Running Average
10       10              10
5        15              7.5
20       35              11.67
15       50              12.5
5        55              11
10       65              10.83

Let’s see how to calculate the running average in SQL Server with the help of a query.

We have a table Employee.
Create Table Employee
(
EmpID int identity(1,1) primary key,
EmpName varchar(50),
Gender varchar(10),
City varchar(30),
Salary int
)
Creating the Employee table
Now I am inserting some records.
Insert Into Employee Values ('Bagesh', 'Male', 'pune', 5000)
Insert Into Employee Values ('Kumar', 'Male', 'Chennai', 4500)
Insert Into Employee Values ('Prema', 'Female', 'Pune', 5500)
Insert Into Employee Values ('Fatima', 'Female', 'Mumbai', 4000)
Insert Into Employee Values ('Ajit', 'Male', 'Patna', 3500)
Insert Into Employee Values ('Mala', 'Female', 'patna', 5000)
Insert Into Employee Values ('Manish', 'Male', 'pune', 6500)
Insert Into Employee Values ('Kajal', 'Female', 'chennai', 7000)
Insert Into Employee Values ('Mohan', 'Male', 'Mumbai', 5500)
Insert Into Employee Values ('Vikash', 'Male', 'pune', 5000)
Insert Into Employee Values ('Sonam', 'Female', 'Mumbai', 15000)


See the table
 
There are multiple ways to calculate the running average.

Using OVER clause

SELECT EmpId,EmpName, Gender, Salary,
AVG(Salary) OVER (ORDER BY EmpID) AS [Running Average]
FROM Employee
See the Output.
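One detail worth noting: Salary is declared as int, so AVG(Salary) returns an integer and the fractional part is truncated. A sketch that casts to decimal first, and also spells out the window frame explicitly (the default is RANGE UNBOUNDED PRECEDING AND CURRENT ROW; an explicit ROWS frame avoids treating ties as peers and is typically faster):

SELECT EmpId, EmpName, Gender, Salary,
AVG(CAST(Salary AS decimal(10,2)))
    OVER (ORDER BY EmpID
          ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS [Running Average]
FROM Employee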
 
Suppose we want to find the running average per Gender; use the SQL script below.
select empid,empname, gender, salary,
AVG(salary) over (partition by gender order by empid)
as [Running Average]
from employee

See the output
 
Suppose we want to find the running average city-wise; we need to use the SQL script below.
select empid,empname, gender, salary,City,
AVG(salary) over (partition by City order by empid)
as [Running Average]
from employee
  

Using a subquery

A second way of calculating the running average is with a correlated subquery. Note that it recomputes the average for every row, so it scales worse than the OVER clause on large tables:
select e.empid,e.empname, e.gender, e.City,e.salary,
(select AVG(e1.Salary) from employee e1 where e1.EmpID<=e.EmpID)
as [Running Average]
from employee e
order by e.EmpID

See the output
 
Hope this will be helpful :)
