How to deal with billions of rows in SQL Server?


Hi guys, we have a SQL Server 2008 database with 30,000,000,000 records in one major table, and we are looking at the performance of our queries. We have already created indexes. We found that we can split our database tables into multiple partitions so the data is spread over multiple files, which increases query performance. Unfortunately, that functionality is only available in SQL Server Enterprise Edition, which makes it unaffordable for us.

Could you guys suggest any other way to maintain and improve query performance?

E.g. SELECT * FROM MyMajorTable WHERE [date] BETWEEN '2000/10/10' AND '2010/10/10'

This query takes around 15 minutes to retrieve around 10,000 records.

A SELECT * is less efficiently served than a query that uses a covering index.

First step: examine the query plan and look for table scans and the steps taking the most effort (%).
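As a starting point you can also measure the I/O and timing directly; a minimal sketch, assuming the table name and date range from the question:

-- Report logical reads and CPU/elapsed time for the statement below
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT *
FROM dbo.MyMajorTable
WHERE [date] BETWEEN '2000/10/10' AND '2010/10/10';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

In Management Studio, "Include Actual Execution Plan" (Ctrl+M) shows the plan itself, where a Table Scan or Clustered Index Scan over 30 billion rows will stand out.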

If you don't have an index on the 'date' column, you need one (assuming sufficient selectivity). Try to reduce the columns in the SELECT list, and if there are 'sufficiently' few, add them to the index as included columns (this can eliminate bookmark lookups into the clustered index and boost performance).
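A minimal sketch of such a covering index; the index name and the included columns (CustomerId, Amount) are assumptions, since the real SELECT list isn't shown:

-- Hypothetical covering index: seek on [date], with the (assumed)
-- remaining SELECT-list columns carried as included columns
CREATE NONCLUSTERED INDEX IX_MyMajorTable_Date
ON dbo.MyMajorTable ([date])
INCLUDE (CustomerId, Amount);

-- The narrowed query can then be answered from the index alone:
SELECT [date], CustomerId, Amount
FROM dbo.MyMajorTable
WHERE [date] BETWEEN '2000/10/10' AND '2010/10/10';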

You could also break the data into separate tables (say by date range) and combine them via a view, as sketched below.
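A minimal sketch of that approach, assuming hypothetical columns (Id, Amount) and a split by year; the CHECK constraints let the optimizer skip member tables that cannot match the date predicate:

-- Hypothetical per-year member tables with CHECK constraints on [date]
CREATE TABLE dbo.MyMajorTable_2009 (
    Id INT NOT NULL,
    [date] DATETIME NOT NULL,
    Amount MONEY NULL,
    CONSTRAINT CK_MyMajorTable_2009
        CHECK ([date] >= '20090101' AND [date] < '20100101')
);

CREATE TABLE dbo.MyMajorTable_2010 (
    Id INT NOT NULL,
    [date] DATETIME NOT NULL,
    Amount MONEY NULL,
    CONSTRAINT CK_MyMajorTable_2010
        CHECK ([date] >= '20100101' AND [date] < '20110101')
);
GO

-- View that unions the members; a query filtered on [date] only touches
-- the tables whose CHECK constraint overlaps the predicate
CREATE VIEW dbo.MyMajorTableAll AS
SELECT Id, [date], Amount FROM dbo.MyMajorTable_2009
UNION ALL
SELECT Id, [date], Amount FROM dbo.MyMajorTable_2010;

Unlike table partitioning, this "partitioned view" pattern works in every edition of SQL Server 2008.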

It is also dependent on your hardware (number of cores, RAM, I/O subsystem speed, network bandwidth).

I suggest you post the table and index definitions.
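If you're not sure how to extract those, the built-in procedures will do (table name assumed from the question):

EXEC sp_help 'dbo.MyMajorTable';       -- columns, constraints, identity info
EXEC sp_helpindex 'dbo.MyMajorTable';  -- existing indexes and their key columns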

