Tuesday, March 31, 2020

Techniques for TM1 TurboIntegrator Scripting


Most applications built with Cognos TM1 require some way to perform ETL (extract, transform, and load): extracting data from an outside source, transforming it to fit an operational need, and then loading the newly formatted data into a target.

TM1 TurboIntegrator review
TM1 TurboIntegrator is the programming or scripting tool that allows you to automate data importation, metadata management, and many other tasks.
Within each process, there are five sections or tabs. They are as follows:
  • Data Source
  • Variables
  • Maps
  • Advanced
  • Schedule
The Data Source tab
You use the Data Source tab to identify the source from which you want to import data to TM1. The fields and options available on the Data Source tab vary according to the Datasource Type that you select.

TurboIntegrator local variables:
  • DatasourceNameForServer: This variable lets us set the name of the data source to be used. This value can (and should) include a fully qualified path or other specific information that qualifies your data source.
  • DatasourceNameForClient: It is similar to DatasourceNameForServer and in most cases will be the same value.
  • DatasourceType: It defines the type or kind of data source to be used, for example, view or character delimited.
  • DatasourceUsername: When connecting to an ODBC data source, this is the name used to connect to the data source.
  • DatasourcePassword: It is used to set the password when connecting to an ODBC data source.
  • DatasourceQuery: It is used to set the query or SQL string used when connecting to an ODBC data source.
  • DatasourceCubeview: It is used to set the name of the cube view when reading from a TM1 cube view. This can be the name of an existing cube view or a view that the process itself defines.
  • DatasourceDimensionSubset: It is used to set the name of the subset when reading from a TM1 subset. This can be the name of an existing subset or a subset that the process itself defines.
  • DatasourceASCIIDelimiter: This can be used to set the ASCII character to be used as a field delimiter when DatasourceType is character delimited.
  • DatasourceASCIIDecimalSeparator: This TI local variable sets the decimal separator to be used in any conversion from a string to a number or a number to a string.
  • DatasourceASCIIThousandSeparator: This TI local variable sets the thousands separator to be used in any conversion from a string to a number or a number to a string.
  • DatasourceASCIIQuoteCharacter: This TI local variable sets the ASCII character used to enclose the fields of the source file when DatasourceType is character delimited.
  • DatasourceASCIIHeaderRecords: It specifies the number of records to be skipped before processing the data source.
Two additional variables to be aware of are:
  • OnMinorErrorDoItemSkip: This TI local variable instructs TI to skip to the next record when a minor error is encountered while processing a record.
  • MinorErrorLogMax: This TI local variable defines the number of minor errors that will be written to the TM1ProcessError.log file during process execution. If this variable is not defined in the process, the default number of minor errors written to the log file is 1000.
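Both of these can simply be assigned in the Prolog. A minimal sketch (the limit shown here is an arbitrary, hypothetical value):
# skip bad records instead of failing, and cap the number of minor errors logged
OnMinorErrorDoItemSkip = 1;
MinorErrorLogMax = 500;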
Using TurboIntegrator Variables
An information cube may be defined with two dimensions, application name and application measure. These dimensions define data points where specific applications can store and retrieve their own information. In the following example, a TI process reads the name of the file to be loaded and the location of that file:
datasourcePath = CellGetS( InformationCubeName, MeasureNameForCategory, MeasureNameForFilePathName );
dataFileName = CellGetS( InformationCubeName, MeasureNameForCategory, MeasureNameForFileName );
Additionally, the process may also read a location to write exceptions or errors that may occur during the processing:
exceptionFilePath = CellGetS( systemVariableCubeName, systemVariableCategory , systemVariableLogsFileName ) ;
Then, we can build our exception variable:
exceptionFileName = exceptionFilePath | 'my process name' | '_Exceptions.txt';
Finally, we use some of the previously mentioned process variables to actually set up our data source (and in this example we are using an ASCII text file as a data source):
# set the datasource info for this process
DatasourceNameForServer = datasourcePath | dataFileName;
DatasourceType = 'CHARACTERDELIMITED';
DatasourceASCIIDelimiter = ',';
Dynamic Definitions
Another advanced technique is to programmatically define a view in a TI process and then set that process's data source to that view name. It is not uncommon to define a view to zero out the cells in the cube into which the process will load incoming data, but it is a little more interesting to define a view that the process itself will then read as its input.
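A minimal sketch of the idea, assuming a hypothetical Sales cube with a Month dimension (the view, subset, and element names here are made up), might look like this in the Prolog:
# rebuild a view and subset to act as this process's data source
If( ViewExists('Sales', 'TI Load View') = 1 );
   ViewDestroy('Sales', 'TI Load View');
EndIf;
ViewCreate('Sales', 'TI Load View');
If( SubsetExists('Month', 'TI Load Months') = 1 );
   SubsetDestroy('Month', 'TI Load Months');
EndIf;
SubsetCreate('Month', 'TI Load Months');
SubsetElementInsert('Month', 'TI Load Months', 'Mar', 1);
ViewSubsetAssign('Sales', 'TI Load View', 'Month', 'TI Load Months');
ViewExtractSkipZeroesSet('Sales', 'TI Load View', 1);
# point the process at the view it has just defined
DatasourceType = 'VIEW';
DatasourceNameForServer = 'Sales';
DatasourceCubeview = 'TI Load View';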
Important TurboIntegrator functions
ExecuteProcess
It allows you to execute a TI process from within a TI process. The format is as follows:
ExecuteProcess(ProcessName, [ParamName1, ParamValue1, ParamName2, ParamValue2]);
The most important return values to know are as follows:
  • ProcessExitNormal(): If your ExecuteProcess returns this, it indicates that the process executed normally.
  • ProcessExitMinorError(): If your ExecuteProcess returns this, it indicates that the process executed successfully but encountered minor errors.
  • ProcessExitByQuit(): If your ExecuteProcess returns this, it indicates that the process exited because of an explicit quit command.
  • ProcessExitWithMessage(): If your ExecuteProcess returns this, it indicates that the process exited normally, with a message written to Tm1smsg.log.
  • ProcessExitSeriousError(): If your ExecuteProcess returns this, it indicates that the process exited because of a serious error.
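For example, a parent process might call a child process and test the result (a minimal sketch; the process name and parameters are hypothetical):
# run a child process with two parameters and check how it finished
nResult = ExecuteProcess('Load Sales', 'pMonth', 'Mar', 'pYear', '2020');
If( nResult <> ProcessExitNormal() );
   ASCIIOutput( exceptionFileName, 'Load Sales did not complete normally' );
EndIf;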
ItemSkip
The ItemSkip function can be used to skip a record or row of data.
If( recordIsInvalid = 1 );
   ASCIIOutput( exceptionFileName, recordId );
   ItemSkip;
EndIf;
ProcessBreak, ProcessError, and ProcessQuit
It is worth mentioning the following functions available in TI, because they let you control how a process terminates:
  • ProcessBreak: stops all processing and forces control to the Epilog
  • ProcessError: terminates the process immediately, with the process flagged as having ended in error
  • ProcessQuit: terminates the process immediately
View handling
ViewZeroOut
This function sets all data points (all cells) in a named view to zero. It is most often used after defining a very specific view.
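Its format takes the cube and view name; for example, against the hypothetical view defined earlier:
# clear the target region of the cube before reloading it
ViewZeroOut('Sales', 'TI Load View');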

PublishView
This function is provided to publish a private view to the TM1 Server so that other TM1 Clients can see and use the view. This was not possible in early versions of TM1 unless you were a TM1 Administrator. The format of this function is:
PublishView(Cube, View, PublishPrivateSubsets, OverwriteExistingView);
The arguments (parameters) passed to the function are extremely important!
  • PublishPrivateSubsets
  • OverwriteExistingView
CubeClearData
The importance of this function is simply that if you want to clear the entire cube, this function is extremely fast.
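Its format is simply the cube name (shown here against a hypothetical Sales cube):
CubeClearData('Sales');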
CellIsUpdateable
This is an important function. You can use it before each cell insert to avoid runtime TI-generated errors, as in the following lines:
If( CellIsUpdateable( CubeName, DimName1, DimName2, DimName3 ) = 1 );
   CellPutN( myValue, CubeName, DimName1, DimName2, DimName3 );
Else;
   ASCIIOutput( ExceptionFile, RecordID, ErrorMsg );
EndIf;
SaveDataAll
This function saves all TM1 data from server memory to disk and restarts the transaction log file. Until a save occurs, changes made to cube data exist only in memory and in the transaction log, so they would have to be recovered from the log after a server crash. To avoid this, you can use the SaveDataAll function. However, it is important to use this function appropriately: used incorrectly, it can cause server locks and crashes.
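A typical, minimal use is a single call in the Epilog of a scheduled (for example, nightly) batch process, after all loads have finished:
# commit all in-memory changes to disk and restart the transaction log
SaveDataAll;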
SubsetGetSize
SubsetGetSize is a useful function that returns a count of the total elements that are in a given subset.
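It is often paired with SubsetGetElementName to loop over a subset. A minimal sketch, with hypothetical dimension, subset, and file names:
# walk every element of a subset and write its name to a text file
nElems = SubsetGetSize('Month', 'Forecast Months');
i = 1;
While( i <= nElems );
   sMonth = SubsetGetElementName('Month', 'Forecast Months', i);
   ASCIIOutput('SubsetElements.txt', sMonth);
   i = i + 1;
End;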
Security functions
  • AddClient and DeleteClient: The AddClient function creates and the DeleteClient function deletes the clients on the TM1 Server. These functions must be used in the metadata section of a TI process.
  • AddGroup and DeleteGroup: The AddGroup function creates and the DeleteGroup function deletes the groups on the TM1 Server. These functions must be used in the metadata section of the TI process.
  • AssignClientToGroup and RemoveClientFromGroup: The AssignClientToGroup function will assign and the RemoveClientFromGroup will remove an existing client from an existing group.
  • ElementSecurityGet and ElementSecurityPut: The ElementSecurityPut function is used to assign and the ElementSecurityGet function is used to retrieve a security level for an existing group for a specific dimension element. The security levels can be None, Read, Write, Reserve, Lock, and Admin.
  • SecurityRefresh: The SecurityRefresh function reads all of the currently set up TM1 security information and applies that information to all of the TM1 objects on the TM1 Server. Be advised, depending on the complexity of the TM1 model and the security that has been set up, this function may take a very long time to execute and during this time all users are locked.
Rules and feeders Management functions
The following four functions can be used to maintain cube rules and feeders. It is important to be familiar with them:
  • CubeProcessFeeders
  • DeleteAllPersistentFeeders
  • ForceSkipCheck
  • RuleLoadFromFile
CubeProcessFeeders
This function becomes important if you are using conditional feeders. Whenever you edit and save a cube rule file, feeders get automatically reprocessed by TM1. You can use this function to ensure that all of the conditional feeders are reprocessed. Keep in mind that all of the feeders for the cube will be reprocessed:
CubeProcessFeeders(CubeName);
DeleteAllPersistentFeeders
To improve performance you can define feeders as persistent. TM1 then saves these feeders into a .feeder file. These feeder files will persist (remain) until they are physically removed. You can use the DeleteAllPersistentFeeders function to clean up these files.
ForceSkipCheck
This function can be placed in the prolog section of a TI process to force TM1 to perform as if the cube that the process is querying has SkipCheck in its rules, meaning it will only see cells with values rather than every single cell.
RuleLoadFromFile
This function will load a cube’s rule file from a properly formatted text file. If you leave the text file argument empty, TI looks for a source file with the same name as the cube (but with a .rux extension) in the server’s data directory.
RuleLoadFromFile(Cube, TextFileName);
SubsetCreateByMDX
This function creates a subset based upon a properly formatted MDX expression.
MDX is a powerful way to create complicated lists. However, TM1 only supports a small number of MDX functions (not the complete MDX list). The format of this function is as follows:
SubsetCreatebyMDX(SubName, MDX_Expression);
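For example, a subset of all leaf-level elements of a hypothetical Product dimension could be built with TM1's supported MDX functions:
SubsetCreatebyMDX('Product Leaves', '{TM1FILTERBYLEVEL( {TM1SUBSETALL( [Product] )}, 0 )}');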
ExecuteCommand
This is a function that you can use to execute a command line. The great thing about this is that you can do all sorts of clever things from within a TI process. The most useful is to execute an external script or MS Windows command file. The format of this function is as follows:
ExecuteCommand(CommandLine, Wait);
CommandLine is the command line you want to execute and the Wait parameter will be set to either 1 or 0 to indicate if the process should wait for the command to complete before proceeding.
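For example, to run a hypothetical Windows batch file and wait for it to finish before the process continues:
ExecuteCommand('cmd /c "D:\TM1Data\scripts\archive_exports.bat"', 1);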
Order of operations within a TurboIntegrator process
  • When you run a TI process, the procedures are executed in the following sequence (basically left to right as they are displayed):
  1. Prolog
  2. Metadata
  3. Data
  4. Epilog
  • Prolog is executed one time only.
  • If the data source for the process is None, Metadata and Data are disabled and do not execute.
  • If the data source for the process is None, Epilog executes one time immediately after Prolog finishes processing.
  • If the data source for the process is not None, Metadata and Data will execute for each record in the data source.
  • Code to build or modify a TM1 dimension resides in the Metadata procedure.
  • The Metadata tab is used to build TM1 subsets.
  • All lines of code in the metadata procedure are sequentially executed for each record in the data source.
  • All lines of code in the data procedure are sequentially executed for each record in the data source.
  • Because the data procedure is executed for each row or record in the data source, an error in that procedure will be raised multiple times, once for each record that triggers it.
  • The data procedure is the procedure used to write/edit code used to load data into a cube.
  • The data source is closed after the data procedure is completed.
  • The epilog procedure is always the last procedure to be executed.
  • Not all TM1 functions will work as expected in every procedure.
Aliases in TurboIntegrator functions
Let us suppose that a company does forecasting on a monthly basis. Each month a new version of the working forecast is created. For example, in January the forecast consists of 12 months of forecasted sales (January through December). In February, the working forecast consists of one month (January) of actual sales data and 11 months of forecasted sales (February through December). In March, the working forecast consists of two months (January and February) of actual sales and 10 months of forecasted sales—and so on. In this example, you can define an alias attribute on the version dimension and use it to refer to whatever version of the forecast is currently running (or is the working version).
Then, TI processes can refer to a version as the working forecast (the alias) and always connect to the correct version.
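As a minimal sketch (the attribute, element, and cube names are hypothetical), a maintenance process can repoint the alias each month, and other processes can then reference the element through it:
# point the 'Working' alias attribute at the current forecast version element
AttrPutS('Working Forecast', 'Version', 'FY2020 Forecast - Mar', 'Working');
# later processes can address that version element by its alias
CellPutN(12345, 'Sales', 'Working Forecast', 'Mar', 'Units');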
CubeSetLogChanges
In TI, you can turn cube logging on or off using the function CubeSetLogChanges. Be very careful to understand when and why to turn cube logging on or off. If your TI process makes data changes to a cube and you need to be able to recover those changes in the event of a server crash, you need to ensure that logging is on. If it is not important to recover changes made by the TI process, turn cube logging off to improve performance.
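A common, minimal pattern (cube name hypothetical) is to switch logging off in the Prolog and back on in the Epilog of a large load:
# Prolog: suspend transaction logging for the bulk load
CubeSetLogChanges('Sales', 0);
# Epilog: re-enable logging once the load is finished
CubeSetLogChanges('Sales', 1);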
Revisiting variable types
In the Contents column of the TurboIntegrator Variables tab you indicate what you want to do with the data in that variable. The options are:
  • Ignore: If you have a variable set to Ignore, TI will not be able to see or use it at all and if you refer to it in any part of your code, you will get an error message when you try to save the process. 
  • Element, Consolidation, Data, or Attribute: These are only used when you have TI generate the code for you via the GUI. This tells TI that the contents of this variable should be one of the following:
  1. An element name or consolidation name (which TI can use to update the relevant dimension)
  2. A piece of data, which TI can write into a cube
  3. An attribute, which is also used to update a dimension.



Monday, March 30, 2020

Top iOS Interview Questions You Must Prepare In 2020

The Verge reported that, as of January 2018, there were well over a billion active Apple devices in the world. With the number of iOS users growing steadily across the world, the future looks bright for iOS app developers. Apple and iOS devices continue to have a loyal customer base, helped in part by innovative new devices such as Apple TV and Apple Watch. With that said, an iOS app development certification is probably the easiest way to hone and prove your skills. There has never been a better time to become an iOS developer. If you are preparing to break into a career in iOS app development, look no further! We have created a list of top frequently asked iOS interview questions that will help you ace your iOS job interview.
However, if you have already given an iOS interview, or have more questions, we encourage you to add them in the comments section below. Our experts will answer them for you.


iOS Interview Questions

1. What is Cocoa and Cocoa Touch?

Cocoa vs Cocoa Touch

Cocoa:
1. Application development environment for OS X
2. Includes the Foundation and AppKit frameworks
3. Used to refer to any class/object that is based on the Objective-C runtime and inherits from the root class

Cocoa Touch:
1. Application development environment for iOS
2. Includes the Foundation and UIKit frameworks
3. Used to refer to application development using any programmatic interface



2. Which JSON framework is supported by iOS?
  • iOS supports the SBJson framework.
  • SBJson is a JSON parser and generator for Objective-C.
  • It provides flexible APIs and additional control, making JSON handling easier.

3. What is the difference between atomic and nonatomic properties? Which is the default for synthesized properties?

Properties specified as atomic always return a fully initialized object. This also happens to be the default state for synthesized properties. But, if you have a property for which you know that retrieving an uninitialized value is not a risk (e.g. if all access to the property is already synchronized via other means), then setting it to nonatomic can give you better performance than atomic.

4. Differentiate ‘app ID’ from ‘bundle ID’. Explain why they are used.

An App ID is a two-part string used to identify one or more apps from a single development team. The string consists of a Team ID and a bundle ID search string, with a period (.) separating the two parts. The Team ID is supplied by Apple and is unique to a specific development team, while the bundle ID search string is supplied by the developer to match either the bundle ID of a single app or a set of bundle IDs for a group of apps.
The bundle ID uniquely identifies each app and is specified in Xcode. A single Xcode project can have multiple targets and therefore output multiple apps. A common use case is an app that has both lite/free and pro/full versions or is branded multiple ways.

5. Which are the ways of achieving concurrency in iOS?

The three ways to achieve concurrency in iOS are:
  • Threads
  • Dispatch queues
  • Operation queues

6. Explain the different types of iOS Application States.

The different iOS application states are:
  • Not running state: when the app has not been launched or was running but was terminated by the system.
  • Inactive state: when the app is running in the foreground but is currently not receiving events. An app stays in this state briefly as it transitions to a different state. The only time it stays inactive is when the user locks the screen or the system prompts the user to respond to some event such as a phone call or SMS message.
  • Active state: when the app is running in the foreground and is receiving events. This is the normal mode for foreground apps.
  • Background state: when the app is in the background and executing code. Most apps enter this state briefly on their way to being suspended. However, an app that requests extra execution time can remain in this state for some time. Also, an app being launched directly into the background enters this state instead of the inactive state.
  • Suspended state: A suspended app remains in memory but does not execute any code. When a low-memory condition occurs, the system may purge suspended apps without notice to make more space for the foreground app.

15. What is SpriteKit and what is SceneKit?

SpriteKit is a framework for easy development of animated 2D objects.
SceneKit is a framework inherited from OS X that assists with 3D graphics rendering.
SpriteKit, SceneKit, and Metal are expected to power a new generation of mobile games that redefine what iOS devices’ powerful GPUs can offer.

16. What are iBeacons?

iBeacon.com defines iBeacon as Apple’s technology standard which allows Mobile Apps to listen for signals from beacons in the physical world and react accordingly. iBeacon technology allows Mobile Apps to understand their position on a micro-local scale, and deliver hyper-contextual content to users based on location. The underlying communication technology is Bluetooth Low Energy.

17. What is an autorelease pool?

Every time -autorelease is sent to an object, it is added to the inner-most autorelease pool. When the pool is drained, it simply sends -release to all the objects in the pool.
Autorelease pools are a convenience that allows you to defer sending -release until “later”. That “later” can happen in several places, but the most common in Cocoa GUI apps is at the end of the current run loop cycle.

18. Differentiate between ‘assign’ and ‘retain’ keyword.

Retain – specifies that retain should be invoked on the object upon assignment. It takes ownership of the object.
Assign – specifies that the setter uses simple assignment. It is used on attributes of scalar types such as float and int.

19. What are layer objects?

Layer objects are data objects which represent visual content and are used by views to render their content. Custom layer objects can also be added to the interface to implement complex animations and other types of sophisticated visual effects.

20. Outline the class hierarchy for a UIButton until NSObject.


UIButton inherits from UIControl, UIControl inherits from UIView, UIView inherits from UIResponder, and UIResponder inherits from the root class NSObject.

Friday, March 27, 2020

The Evolution of SQL Server


SQL Server DBA Evolution:

In 1988, Microsoft released its first version of SQL Server. It was designed for the OS/2 platform and was jointly developed by Microsoft and Sybase. During the early 1990s, Microsoft began to develop a new version of SQL Server for the NT platform. While it was under development, Microsoft decided that SQL Server should be tightly coupled with the NT operating system. In 1992, Microsoft assumed core responsibility for the future of SQL Server for NT. In 1993, Windows NT 3.1 and SQL Server 4.2 for NT were released. Microsoft's philosophy of combining a high-performance database with an easy-to-use interface proved to be very successful. Microsoft quickly became the second most popular vendor of high-end relational database software. In 1994, Microsoft and Sybase formally ended their partnership. In 1995, Microsoft released version 6.0 of SQL Server. This release was a major rewrite of SQL Server's core technology. Version 6.0 substantially improved performance, provided built-in replication, and delivered centralized administration. In 1996, Microsoft released version 6.5 of SQL Server. This version brought significant enhancements to the existing technology and provided several new features.




What's New in Version 6.5
SQL Server version 6.5 is more than a maintenance release. It includes numerous features that further extend SQL Server. Following are several of the key features found in version 6.5:
Distributed Transaction Coordinator (DTC)
Replication to ODBC subscribers
Internet integration
Improved performance
Data warehousing extensions
Simplified administration
Distributed Transaction Coordinator (DTC)
The Distributed Transaction Coordinator (DTC) controls transactions that span multiple SQL Server systems. This feature allows applications to update multiple databases in a distributed environment while providing transaction management. Through DTC, a data modification is guaranteed to run to completion or the modification is rolled back in its entirety. For example, if a modification updates data in two servers and the second server crashes during the update, the entire transaction is rolled back from both servers.
Replication to ODBC Subscribers
Version 6.5 has extended replication to database products other than SQL Server. Through Open Database Connectivity (ODBC), SQL Server can replicate changes to products such as Oracle, Sybase, IBM DB2, Access, and other database products. This feature offers administrators and developers a simplified and reliable method for distributing data.
Internet Integration
SQL Server provides direct Internet support through the SQL Web Assistant and Microsoft's Internet Information Server (IIS). The SQL Web Assistant is included with version 6.5; it generates HTML scripts for SQL Server data. This product allows you to create Web pages that contain SQL Server data.
SQL Server version 6.5 also provides direct support for Microsoft's IIS product, which means that complete Internet solutions can be delivered through the combination of SQL Server, NT, and IIS.
Improved Performance
SQL Server version 6.5 delivers improved performance over previous versions through enhancements such as reduced checkpoint serialization, faster sorting and indexing, and improved integration with the NT operating system. Version 6.5 also offers several new counters to help tune SQL Server for maximum performance.
Data Warehousing Extensions
SQL Server version 6.5 provides several data warehousing extensions and improved support for Very Large Databases (VLDB). These extensions include several new commands for online analytical processing (OLAP). Two of these new commands, CUBE and ROLLUP, allow a developer to create a single query that returns a recordset based on multiple dimensions and that contains aggregate information. Version 6.5 also provides improved VLDB support through single table backups/restorations and point-in-time recovery.
Simplified Administration
SQL Server version 6.5 continues to simplify database administration through improvements to the Enterprise Manager and through wizards. In version 6.5, the Enterprise Manager offers a customizable toolbar and menu system, an improved Transfer Manager, and other interface enhancements. Version 6.5 also includes a Database Maintenance wizard that automates common DBA tasks such as backups, database consistency checks (DBCC), and index maintenance (such as UPDATE STATISTICS).
Features Common to Versions 6.5 and 6.0

Many of the features found in version 6.5 were originally released in version 6.0. Version 6.0 was a significant upgrade from the previous version of SQL Server (version 4.2x). Many of these changes were made in response to the complaint that version 4.2x was better suited to handle the needs of a department rather than an enterprise. Version 6.x meets the demanding requirements of an enterprise and also includes several other features that help differentiate it from its peers.
Enterprise Manager
The Enterprise Manager combines the functionality of version 4.2x's Object Manager and SQL Administrator into a single easy-to-use interface. From the Enterprise Manager, you can administer multiple servers, configure data replication, and develop databases.
NOTE: In addition to managing SQL Server 6.x, you can manage SQL Server 4.2x from the Enterprise Manager. To do this, you must first run--from your 4.2x version of SQL Server--the SQLOLE42.SQL script that ships with version 6.x.
Data Replication
Before SQL Server version 6.x, if you wanted replication, you had to buy a replication product or build your own replication services. Neither alternative was very appealing. Data replication products are expensive to purchase and building your own replication service can be complex and time consuming.
Fortunately, SQL Server 6.x provides a robust replication component that can meet the needs of an enterprise. The uses for replication are endless. Data warehousing, distributed processing, and end-user reporting are just a few examples of how SQL Server's data replication component can be used.
SQL Executive
SQL Executive helps automate many of the routine tasks a DBA must perform. Event scheduling, alert notification, replication management, and task management are some of the functions that SQL Executive provides.
NOTE: SQL Executive replaces version 4.2x's SQL Monitor.
OLE Automation
Distributed Management Objects (SQL-DMO) allow developers to tap into the power of SQL Server through the ease of OLE automation. Developers can use Visual Basic, Excel, and other products that support the VBA programming language to build custom administration scripts. These objects simplify the process of creating management scripts by allowing programs to interface with SQL Server through objects, methods, and properties.
Parallel Data Scanning and Read-Ahead Manager
Through parallel data scanning and read-ahead algorithms, version 6.x has significantly improved SQL Server performance. Certain types of queries, such as table scans, execute 400 percent faster than in version 4.2x.
Multithreaded Kernel
SQL Server version 6.x features a redesigned kernel that results in improved transaction performance and scalability. Previous versions of SQL Server were unable to effectively scale beyond two or three processors. Version 6.x is better suited to take advantage of multiple processors.
Optimizer Improvements
Version 6.x's optimizer has been significantly improved. The likelihood of a proper query execution plan has increased through better index usage and improved subquery support. Also new with version 6.x are optimizer hints. Now you can explicitly force the optimizer to choose an index. Before version 6.x, developers sometimes had to use nonstandard techniques to force the optimizer to choose an appropriate index.
High-Performance Backup and Restoration
Version 6.x uses parallel optimization techniques to minimize backup and restoration times. These techniques allow Very Large Databases to be backed up and restored in a reasonable amount of time.
Very Large Database (VLDB) Support
Earlier versions of SQL Server had a practical size limitation of 50 to 60 gigabytes. Version 6.x can effectively support databases in excess of 100 gigabytes. SQL Server uses parallel optimization techniques to maximize performance. This enables SQL Server to post significant performance gains over previous versions.
Datatypes
The following three datatypes have been added to version 6.x:
Decimal
Numeric
Double-precision
Additionally, an identity property has been added. It is a value that is automatically incremented when a new record is inserted into a table. You can have only one identity column per table.
NOTE: Version 6.x is ANSI SQL 92-compliant.
Data Integrity
Several new data constraints have been added to version 6.x. These constraints provide declarative referential integrity (DRI), relieving the developer from having to enforce integrity through hand-coded triggers. Constraints are defined with the CREATE TABLE and ALTER TABLE statements. See Table 3.1 for a comparison of data constraints.
Table 3.1. Comparison of data constraints.

Version 6.x      Earlier Versions
CHECK            CREATE trigger or rule
DEFAULT          CREATE default
FOREIGN KEY      CREATE trigger, sp_foreignkey
PRIMARY KEY      CREATE UNIQUE index, sp_primarykey
REFERENCE        CREATE trigger
UNIQUE           CREATE UNIQUE index
NOTE: In SQL Server 4.2x, the system procedures sp_primarykey and sp_foreignkey were strictly for documenting primary keys and foreign keys. They do not enforce data integrity and have been removed from version 6.x.
CHECK Constraint The CHECK constraint limits the range of data values a column can contain. The CHECK constraint can be created at the table or column level.
DEFAULT Constraint A DEFAULT constraint automatically enters a default value into the column when a value is not specified. The DEFAULT constraint can be created at the table or column level.
FOREIGN KEY Constraint The FOREIGN KEY constraint enforces foreign key relationships. It is used with the REFERENCE and PRIMARY KEY constraints.
PRIMARY KEY Constraint The PRIMARY KEY constraint uniquely identifies a primary key and enforces referential integrity. The column it references must contain unique data values and cannot be NULL. It is used with the REFERENCE and FOREIGN KEY constraints.
REFERENCE Constraint The REFERENCE constraint is used to enforce referential integrity in conjunction with the PRIMARY KEY and FOREIGN KEY constraints.
UNIQUE Constraint The UNIQUE constraint prevents duplicate data values. This constraint is similar to the PRIMARY KEY constraint, except that it allows NULLs.
NOTE: Before version 6.x, referential integrity (RI) could be enforced only through the use of triggers. This meant that you had to build extensive code to enforce RI. With version 6.x, you can use the REFERENCE, PRIMARY KEY, and FOREIGN KEY constraints to enforce RI. However, you must still use triggers to perform cascading updates and deletes.
Cursors
ANSI-SQL cursors and engine-based cursors are part of version 6.x. In previous versions of SQL Server, cursors could be created only by using DB-LIB or ODBC API calls. SQL Server's cursors are fully scrollable and permit data modifications. ANSI cursors (which are row oriented) are preferred to engine-based cursors (which are set oriented).
Summary
SQL Server 6.x offers significant improvements and enhancements over earlier versions.