How to access a MySQL database in a J2ME application via the NetBeans IDE

September 14, 2009 · 1 Comment
Filed under: Featured, NetBeans 

I have made a J2ME application that needs to access a MySQL database, but MIDlets do not support the java.sql classes. So how can I store data and retrieve it from within my MIDlet class?

Do I have to use web services to access the MySQL database? If so, how can I achieve that in NetBeans?
Can someone explain how to create a web service, how to connect it to the MIDlet, and where I should write the SQL queries?
I would like step-by-step instructions on how to do this.
So far I have used a record store to store data, but what I need is to connect to a web service and work with a database.

How can I interact with a J2EE web application via an HTTP connection? I mean, what is the exact URL I should use? Is it the URL of a JSP page in the web app? If so, how does that JSP page get directed to the servlet? Can you give me an example URL that carries at least one parameter from the mobile app to the JSP?




Another thing: suppose I made a little mobile app in which you can type the URL of a remote web site and display it on the screen. What I did was type the URL into a text box, and a separate thread connects to the given page. But when I read the content of the page via an input stream, the phone displays the source code of the page. I want to see the rendered site on the display instead. How can that be achieved?

You can't create a direct database connection in J2ME the way we do in J2SE or J2EE.
Instead of using the web services API in J2ME, I would recommend using simple HTTP requests, since the web services API is not supported by older mobile phones. Create a small web application and handle the database (the SQL and all) inside it.
You can use HTTP requests to send the parameters of the SQL statement, and the server can send the data back as the HTTP response, as XML or just plain text.
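Here is a rough sketch of that idea in Java ME (the server name, page, and parameter are made-up examples; in a real MIDlet you would call this from a separate thread so the UI does not block):

import java.io.IOException;
import java.io.InputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

// Send one parameter to a hypothetical JSP/servlet and read the reply.
private String sendName(String name) throws IOException {
    String url = "http://www.example.com/mywebapp/saveUser.jsp?name=" + name;
    HttpConnection conn = (HttpConnection) Connector.open(url);
    InputStream in = null;
    try {
        conn.setRequestMethod(HttpConnection.GET);
        if (conn.getResponseCode() != HttpConnection.HTTP_OK) {
            throw new IOException("HTTP error " + conn.getResponseCode());
        }
        in = conn.openInputStream();
        StringBuffer sb = new StringBuffer();
        int ch;
        while ((ch = in.read()) != -1) {
            sb.append((char) ch);
        }
        return sb.toString(); // plain text or XML produced by the server
    } finally {
        if (in != null) in.close();
        conn.close();
    }
}

On the server side, the JSP (or a servlet it forwards to) reads the parameter with request.getParameter("name"), runs the SQL against MySQL via JDBC, and simply prints the result into the response; the MIDlet itself never touches java.sql.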

Index Types

In addition to an index being clustered or nonclustered, it can be configured in other ways:
• Composite index: An index that contains more than one column. In both SQL Server 2005 and 2008, you can include up to 16 columns in an index, as long as the index doesn’t exceed the 900-byte limit. Both clustered and nonclustered indexes can be composite indexes.
• Unique index: An index that ensures the uniqueness of each value in the indexed column. If the index is composite, the uniqueness is enforced across the columns as a whole, not on the individual columns. For example, if you were to create an index on the FirstName and LastName columns in a table, the names together must be unique, but the individual names can be duplicated.
A unique index is automatically created when you define a primary key or unique constraint:

  – Primary key: When you define a primary key constraint on one or more columns, SQL Server automatically creates a unique, clustered index if a clustered index does not already exist on the table or view. However, you can override the default behavior and define a unique, nonclustered index on the primary key.
  – Unique: When you define a unique constraint, SQL Server automatically creates a unique, nonclustered index. You can specify that a unique clustered index be created if a clustered index does not already exist on the table.
• Covering index: A type of index that includes all the columns that are needed to process a particular query. For example, your query might retrieve the FirstName and LastName columns from a table, based on a value in the ContactID column. You can create a covering index that includes all three columns (see the example below).
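To make these concrete, here is a brief T-SQL sketch against the hypothetical Contacts table used in the examples above (the index names are illustrative):

-- Composite, unique index: the FirstName/LastName pair must be unique.
CREATE UNIQUE NONCLUSTERED INDEX IX_Contacts_Name
ON Contacts (LastName, FirstName);

-- Covering index: ContactID is the key; the name columns ride along at
-- the leaf level (INCLUDE, SQL Server 2005/2008), so a query that looks
-- up names by ContactID is answered entirely from the index.
CREATE NONCLUSTERED INDEX IX_Contacts_ContactID
ON Contacts (ContactID)
INCLUDE (FirstName, LastName);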
Index Design
As beneficial as indexes can be, they must be designed carefully. Because they can take up significant disk space, you don’t want to implement more indexes than necessary. In addition, indexes are automatically updated when the data rows themselves are updated, which can lead to additional overhead and can affect performance. As a result, index design should take into account a number of considerations.
Database
As mentioned above, indexes can enhance performance because they can provide a quick way for the query engine to find data. However, you must also take into account whether and how much you’re going to be inserting, updating, and deleting data. When you modify data, the indexes must also be modified to reflect the changed data, which can significantly affect performance. You should consider the following guidelines when planning your indexing strategy:
• For tables that are heavily updated, use as few columns as possible in the index, and don’t over-index the tables.
• If a table contains a lot of data but data modifications are low, use as many indexes as necessary to improve query performance. However, use indexes judiciously on small tables because the query engine might take longer to navigate the index than to perform a table scan.
• For clustered indexes, try to keep the length of the indexed columns as short as possible. Ideally, try to implement your clustered indexes on unique columns that do not permit null values. This is why the primary key is often used for the table’s clustered index, although query considerations should also be taken into account when determining which columns should participate in the clustered index.
• The uniqueness of values in a column affects index performance. In general, the more duplicate values you have in a column, the more poorly the index performs. On the other hand, the more unique each value, the better the performance. When possible, implement unique indexes.
• For composite indexes, take into consideration the order of the columns in the index definition. Columns that will be used in comparison expressions in the WHERE clause (such as WHERE FirstName = 'Charlie') should be listed first. Subsequent columns should be listed based on the uniqueness of their values, with the most unique listed first.
• You can also index computed columns if they meet certain requirements. For example, the expression used to generate the values must be deterministic (which means it always returns the same result for a specified set of inputs). For more details about indexing computed columns, see the topic “Creating Indexes on Computed Columns” in SQL Server Books Online.
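For instance, a deterministic computed column can be indexed like this (a sketch with a hypothetical Orders table):

-- Quantity * UnitPrice yields the same result for the same inputs,
-- so the computed column is deterministic and can be indexed.
ALTER TABLE Orders ADD TotalPrice AS (Quantity * UnitPrice);
CREATE NONCLUSTERED INDEX IX_Orders_TotalPrice ON Orders (TotalPrice);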
Queries
Another consideration when setting up indexes is how the database will be queried. As mentioned above, you must take into account the frequency of data modifications. In addition, you should consider the following guidelines:
• Try to insert or modify as many rows as possible in a single statement, rather than using multiple queries.
• Create nonclustered indexes on columns used frequently in your statement’s predicates and join conditions.
• Consider indexing columns used in exact-match queries.

Understanding MySQL Query Cache for PHP Developers

June 22, 2009 · Leave a Comment
Filed under: Featured, General, JAVA, PHP 

Problem Statement:
Many PHP developers using MySQL have an unclear understanding of the MySQL query cache, so we decided to write a series of introductory articles to get everyone on the same page. This article is the first installment of the series; here we introduce the basics of the query cache in MySQL. Note that unlike a typical book chapter, this is a low-fat introduction to query caching for MySQL: less theory and more actionables.
What is a MySQL query cache?
It turns out that MySQL has a built-in query cache that can cache a specific type of queries — SELECT statements — to speed up delivery of the result sets. The cache can increase performance for many instances but can also hurt performance if not used wisely.
What can be cached in the MySQL query cache?
Only SELECT statements can be cached, and this does not include prepared SELECT statements. Query caching works only for SELECT statements that are fully deterministic and return the same result every time. This means you cannot use nondeterministic functions that return different data depending on the situation. For example:
// Following SELECT query can be cached
$stmt = "SELECT * FROM user WHERE active = 1";

// Following SELECT query cannot be cached (NOW() is nondeterministic)
$stmt = "SELECT * FROM user WHERE signup_date >= NOW()";

// Following SELECT query cannot be cached (RAND() is nondeterministic)
$stmt = "SELECT * FROM user ORDER BY RAND()";
Here are the requirements a query must meet to take advantage of the query cache:
• Only exact queries are serviced from the cache — must match the stored query in exact detail.
• Queries with placeholders — such as the ones for prepared statements — are not cached in query cache
• Queries with user defined functions or non-deterministic functions cannot be cached
• Any table changes (such as issuing of an ALTER statement) will remove the queries from the cache for that table
Introduction to query cache parameters
The more you understand the query caching parameters, the better you will be at tuning the query cache to your advantage. First, find out which global query caching parameters you can fiddle with by running the following query at the mysql command-line prompt:
mysql> show global variables like '%query_cache%';
A sample output is shown below:
+------------------------------+-----------+
| Variable_name                | Value     |
+------------------------------+-----------+
| have_query_cache             | YES       |
| query_cache_limit            | 1048576   |
| query_cache_min_res_unit     | 4096      |
| query_cache_size             | 536870912 |
| query_cache_type             | ON        |
| query_cache_wlock_invalidate | OFF       |
+------------------------------+-----------+
6 rows in set (0.00 sec)
The purpose of each parameter is described briefly below:
• have_query_cache – indicates whether the query cache is available in this server build (YES/NO)
• query_cache_limit – the maximum size of a cacheable result set (default: 1048576 bytes, or 1 MB). If your query returns a result set larger than this limit, it will NOT be cached
• query_cache_min_res_unit – the smallest block size allocated by the query cache (default: 4 KB)
• query_cache_size – the total memory available to the query cache (536870912 bytes, i.e. 512 MB, in the sample above)
• query_cache_type – when set to ON (1), query caching is on for all applicable queries; when set to OFF (0), query caching is turned off; and when set to DEMAND (2), caching applies only to queries that include the SQL_CACHE directive
• query_cache_wlock_invalidate – causes the query cache to invalidate any query in the cache if a write lock is executed against the table(s) it uses
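For example, to fiddle with these at runtime (the sizes here are arbitrary sample values, not recommendations), you can resize the cache to 64 MB and switch it to on-demand mode, after which only queries carrying the SQL_CACHE directive are cached:

mysql> SET GLOBAL query_cache_size = 67108864;
mysql> SET GLOBAL query_cache_type = DEMAND;
mysql> SELECT SQL_CACHE * FROM user WHERE active = 1;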
What's your query cache status right now?
To find out what's going on with your query cache, run the following command from the MySQL command-line prompt:
mysql> show status like '%qc%';
Here is a sample result:
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Qcache_free_blocks      | 978       |
| Qcache_free_memory      | 527371984 |
| Qcache_hits             | 645545    |
| Qcache_inserts          | 130796    |
| Qcache_lowmem_prunes    | 0         |
| Qcache_not_cached       | 417579    |
| Qcache_queries_in_cache | 4973      |
| Qcache_total_blocks     | 11167     |
+-------------------------+-----------+
8 rows in set (0.00 sec)
Here are some brief explanations of these status metrics:
• Qcache_free_blocks – number of memory blocks marked as free, which indicates memory fragmentation
• Qcache_free_memory – total amount of memory free for the query cache
• Qcache_hits – number of times a query result was found in the query cache
• Qcache_inserts – number of times query results were written to the query cache
• Qcache_lowmem_prunes – number of queries that were removed from the cache to free up memory for new queries
• Qcache_not_cached – number of noncached queries (not cacheable, or not cached due to the query_cache_type setting)
• Qcache_queries_in_cache – number of queries currently registered in the query cache
• Qcache_total_blocks – total number of blocks in the query cache
Calculating query cache hits vs misses
Here is the formula for calculating hit ratio for query cache:
$totalSelectQueryCount = $comSelect + $qcacheHits
$percentHits = ($qcacheHits * 100)/ $totalSelectQueryCount
The formula adds up all the SELECT queries handled by the server using two MySQL counters: Com_select (SELECTs that were actually executed) and Qcache_hits (SELECTs answered from the cache), then expresses the cache hits as a percentage of that total.
To set $comSelect, run the query show global status like '%com_select%'. For example:
mysql> show global status like '%com_select%';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| Com_select    | 1739663 |
+---------------+---------+
To set $qcacheHits, run show status like '%qcache_hit%'. For example:
mysql> show status like '%qcache_hit%';
+---------------+----------+
| Variable_name | Value    |
+---------------+----------+
| Qcache_hits   | 20786961 |
+---------------+----------+
With the above sample numbers, the hit ratio works out to 92.28%, which is great.
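You can let MySQL do the arithmetic itself with the two counters shown above:

mysql> SELECT (20786961 * 100) / (1739663 + 20786961) AS percent_hits;

This returns approximately 92.28.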
Managing query cache
To manipulate your query cache, you can use the following MySQL statements from the MySQL command-line:
To remove all the queries from your query cache, run:
RESET QUERY CACHE;
To defragment the query cache memory, run:
FLUSH QUERY CACHE;

Configuring Apache2.2 mod_proxy_ajp with Tomcat

June 4, 2009 · Leave a Comment
Filed under: Featured, JSP, Linux, MySql, MySQL 5.1 

mod_proxy_ajp is an Apache module which can be used to forward a client HTTP request to an internal Tomcat application server using the AJP protocol. In this example I have used JSPWiki as the example application, running on a Windows server with Apache 2.2 and Tomcat 5.5.

Advantages of fronting Tomcat with mod_proxy_ajp:

  • You can gain a lot of flexibility (many Apache modules/features can be used, especially name-based virtual hosting)
  • Practical for those who need to support Java applications along with PHP / Perl (only one Apache server is needed)
  • Certificate management is easier in the Apache configuration (though this point is somewhat subjective)
  • Serving static HTTP resources is not Tomcat's main objective (it is not optimized for that)
  • Load balancing/cluster management is easier with an Apache front end

Tomcat configuration

We just have to create the AJP connector in the conf/server.xml file, like this:

<Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />

This line will enable AJP connections on port 8009 of your Tomcat server (localhost, for example).

Apache2 configuration

One way (useful if this apache is a global front end) is to create a virtual host for this application.

  • ProxyPass and ProxyPassReverse are classic reverse proxy directives used to forward the stream to another location.
  • ajp://… is the AJP connector location (your tomcat’s server host/port)
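Note that mod_proxy_ajp depends on mod_proxy, so both modules must be loaded in httpd.conf before the directives below will work (the paths shown are the stock Windows layout; adjust them to your installation):

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so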

# JSP Wiki DNS
<VirtualHost 10.1.1.170:80>
    ServerName wiki.example.com
    ServerAlias wiki
    DocumentRoot "D:/Tomcat 5.5/webapps/JSPWiki/"
    #DocumentRoot "D:/Apache2.2/htdocs/JSPWiki"

    <Proxy *>
        AddDefaultCharset Off
        Order deny,allow
        Allow from all
    </Proxy>

    ProxyPass        / ajp://localhost:8009/
    ProxyPassReverse / ajp://localhost:8009/
</VirtualHost>

The following entries were made in server.xml on the Tomcat application server:

<Host name="wiki.example.com" debug="0"
      appBase="D:/Tomcat 5.5/webapps"
      unpackWARs="true" autoDeploy="true">

    <Alias>wiki</Alias>

    <Context path="" docBase="D:/Tomcat 5.5/webapps/JSPWiki" debug="1"/>

    <Valve className="org.apache.catalina.valves.AccessLogValve"
           directory="logs" prefix="home_access_log." suffix=".txt"
           pattern="common" resolveHosts="false"/>
</Host>

Under JSPWiki/WEB-INF/jspwiki.properties, the base URL was modified as follows:

jspwiki.baseURL=http://wiki/JSPWiki/

A Diagram of the MySQL information schema

May 31, 2009 · Leave a Comment
Filed under: Featured, General, MySql, MySql 5.0, MySQL 5.1 

This page contains a clickable diagram of the MySQL data dictionary implementation, the information_schema database. The diagram is an image map, so you can click on a table to link through to the relevant MySQL reference page:


About the Diagram

The diagram was initially created in Microsoft Visio by reverse-engineering the information_schema database via the MyODBC driver. Afterwards, relationships were added by hand. You can check out the history of changes here. If you notice any mistakes or omissions, or if you have a question, just send me an email at my hotmail address (R_P_Bouman at hotmail dot com).

For more info on those cool MySQL features, such as views, stored procedures/functions and triggers, check out http://www.mysqldevelopment.com/

You should realise that this is not a physical data model of the implementation. Rather, it is a conceptual model of the structure of the information schema database. As such, it can be used as a guide when building queries against the information schema.
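For example, guided by the relationship between KEY_COLUMN_USAGE and TABLE_CONSTRAINTS (discussed below), you can list every foreign key column together with the constraint that defines it:

SELECT k.CONSTRAINT_SCHEMA, k.CONSTRAINT_NAME,
       k.TABLE_NAME, k.COLUMN_NAME,
       k.REFERENCED_TABLE_NAME, k.REFERENCED_COLUMN_NAME
FROM   information_schema.KEY_COLUMN_USAGE k
JOIN   information_schema.TABLE_CONSTRAINTS c
  ON   c.CONSTRAINT_SCHEMA = k.CONSTRAINT_SCHEMA
  AND  c.CONSTRAINT_NAME   = k.CONSTRAINT_NAME
  AND  c.TABLE_NAME        = k.TABLE_NAME
WHERE  c.CONSTRAINT_TYPE = 'FOREIGN KEY';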

Please, Redistribute!

You can download the original Visio 2002 file here. You’ll probably need the Entity Relationship stencil shipped with your Visio product. The diagram is also available as a gif image.

You are free to copy or redistribute the diagram in any form. This includes saving and redistributing the diagram in another format (pdf, jpeg, whatever). However, I would appreciate it if you would not modify the contents of the model without notifying me. Also, I would appreciate it if you do not remove the footer identifying me as the author of the diagram.

Notation Conventions

The diagram is based on the Visio stencil for Entity Relationship modelling. Entities (tables) are drawn as rectangles; relationships are drawn as lines connecting entities. Cardinalities are drawn using the usual "crow's feet" markers.

Some general notes on the notation provided by Visio:

  • Columns shown in Bold Type Face are not nullable; that is, a value is mandatory for these columns
  • The Primary Key Columns appear in a separate box right beneath the table name
  • Non-identifying relationships are drawn using a dashed connection line
  • (Partially) Identifying relationships are drawn using a solid connection line
  • In the left margin, right before the column names, one or more comma-separated markers may appear, indicating that the column references a parent column as part of a relationship definition

Colors

I used background colors to mark functionally related tables. Depending upon your criteria, you could recognize other functionally related units. I chose these colors:

  • dark blue for the schema (database) related tables, SCHEMATA
  • light blue for the character set and collation related tables
  • red for the relational core tables
  • khaki for the STATISTICS table, which holds information concerning indexes
  • yellow for the privilege related tables
  • green for the constraint related tables
  • magenta for the ROUTINES and TRIGGERS tables

Rolenames

I added rolenames for the relationships (29-06-2005). You can still download the previous version of the Visio 2002 file without rolenames. The gif image of this previous version is also available. Apart from the rolenames, the previous version is typeset in a 10pt font, whereas the new version is set in an 8pt font. Just take your pick.

Right after the rolename, you can see an integer between parentheses. This integer corresponds to the foreign key markers (FK1, FK2, etcetera) appearing right before some column names. To see how this works, look at the relationship between KEY_COLUMN_USAGE and TABLE_CONSTRAINTS. The rolename is "defined by", and it is followed by "(1)". The "1" between the parentheses corresponds to the KEY_COLUMN_USAGE columns CONSTRAINT_SCHEMA and CONSTRAINT_NAME. Right before these columns you can see the "FK1" markers, where the "1" in "FK1" indicates correspondence to the "1" between the relationship parentheses.

The diagram does not show the details of the entire relationship mapping. For instance, looking at the diagram does not reveal that the CONSTRAINT_SCHEMA column in KEY_COLUMN_USAGE refers to the CONSTRAINT_SCHEMA column in TABLE_CONSTRAINTS. You can only see that it must map to either the CONSTRAINT_SCHEMA or the CONSTRAINT_NAME column, because the pair of them forms an identifier for TABLE_CONSTRAINTS. However, by looking at the column names it's not too hard to guess that KEY_COLUMN_USAGE.CONSTRAINT_SCHEMA will map to TABLE_CONSTRAINTS.CONSTRAINT_SCHEMA, and that KEY_COLUMN_USAGE.CONSTRAINT_NAME will map to TABLE_CONSTRAINTS.CONSTRAINT_NAME.

To avoid cluttering of the diagram, only one rolename is displayed. I chose to display the rolename that applies to the child end (usually the “many” side) of the relationship. Visio refers to this as the “Inverse Text” of the “Verb phrase”. This type of rolename expresses how the child entity relates to the parent entity. In most cases, only one such rolename is enough to think of a meaningful phrase for the other rolename. My choice for naming the child end instead of the parent end is arbitrary. I am willing to change this if you provide suggestions to do so.

Using the entity names and the rolename, the semantics of a relationship can be expressed in a simple English sentence:

  1. Take the singular form of the name of a child entity
  2. Append the rolename as it appears in the diagram
  3. Append the singular form of the name of the corresponding parent entity

By this convention, the relationship between SCHEMATA and CHARACTER_SETS would read:

  1. “SCHEMA”
  2. “has default”
  3. “CHARACTER_SET”

For a more detailed understanding of the relationship, expand this pattern by specifying the cardinalities. Cardinalities specify in what quantities instances at either side of a relationship relate to each other. Sticking to our previous example, the sentence would read:
A Schema has exactly one default Characterset.
Another example, for the relationship between COLUMNS and CHARACTER_SETS:
A Column optionally draws (characters) from at most one Characterset.

I tried to make the rolenames as descriptive as possible, but any suggestions are very welcome.

Modelling choice 1: The CONSTRAINT_SCHEMA and INDEX_SCHEMA columns vs the TRIGGER_SCHEMA column

In MySQL, indexes as well as constraints reside in the same schema as the table on which they are defined. The same holds for triggers: a trigger always resides in the same schema as its associated table. There's a difference too: indexes and constraints are uniquely identified by their respective names (INDEX_NAME and CONSTRAINT_NAME) within their associated table. Triggers are uniquely identified by their name within their schema (which is always the same schema as that of their associated table).

This affects the diagram as far as the columns CONSTRAINT_SCHEMA (tables: TABLE_CONSTRAINTS and KEY_COLUMN_USAGE) and INDEX_SCHEMA (table: STATISTICS) are concerned. Because we know that the value for these columns is identical to the value in the TABLE_SCHEMA column, we can get away with not including these columns in the identifiers of their tables. Instead, we can use the TABLE_SCHEMA column. Also, there’s no need to include separate relationships between SCHEMATA and TABLE_CONSTRAINTS and SCHEMATA and STATISTICS. The former is already determined because relationships exist between SCHEMATA and TABLES and TABLES and TABLE_CONSTRAINTS. The latter is already determined because relationships exist between SCHEMATA and TABLES, TABLES and COLUMNS, and finally, COLUMNS and STATISTICS.

So, triggers are in a separate namespace: triggers can be uniquely identified within a schema by using their name (TRIGGER_NAME). Although a trigger is always associated with a table, much in the same way as a table constraint or an index is, the TRIGGERS table does have its 'own' identifier, which is made up of TRIGGER_SCHEMA and TRIGGER_NAME, although the values of the TRIGGER_SCHEMA and EVENT_OBJECT_SCHEMA columns are in fact always equal.

Modelling choice 2: The CATALOG columns

I chose to exclude the %CATALOG columns from the primary keys and relationship definitions. MySQL does not support catalogs, which is fine according to ISO 9075. However, as far as I can see, the MySQL implementation is not entirely consistent with ISO 9075. It’s not a big thing though, considering the fact that catalogs are not supported. You can safely skip the next few paragraphs if you’re not interested in features that are not, and most probably will not be supported (at least in the near future).

The CATALOG column is usually present as expected, that is, in those places where an object would need to be identified within a catalog (that is, if MySQL supported catalogs!). When a CATALOG column is present it always evaluates to the "NULL value" because MySQL does not support catalogs, which is ISO 9075 compliant. There are some cases where one could expect a CATALOG column, but where it is omitted without raising any questions: in COLLATIONS (COLLATION_CATALOG), CHARACTER_SETS (CHARACTER_SET_CATALOG), and in those places where a COLLATION or CHARACTER_SET could be referenced. Note that a fully compliant ISO 9075 implementation would have "NULL" CATALOG columns in these cases though (as well as a %_SCHEMA column!).

There are some other cases where the MySQL implementation omits CATALOG columns, and where it does raise questions. Consider the KEY_COLUMN_USAGE table. We can see a CONSTRAINT_CATALOG and TABLE_CATALOG column. The former would serve to reference (and thus identify) a table constraint (along with CONSTRAINT_SCHEMA and CONSTRAINT_NAME). The latter would serve to reference (and thus identify) a table column (along with TABLE_SCHEMA and TABLE_NAME and COLUMN_NAME).

So far, so good, but here we go: The KEY_COLUMN_USAGE table seems to be able to reference another table column via the REFERENCED_TABLE_SCHEMA, REFERENCED_TABLE_NAME and REFERENCED_COLUMN_NAME columns. This is used to define the mapping between a foreign key column and the corresponding primary key or unique constraint column. I would expect to see a REFERENCED_TABLE_CATALOG column for the sake of consistency. A total lack of CATALOG columns would do equally well in this respect.

But, okay. I’m done now. It’s not really important, considering that the REFERENCED% columns are not even standard in KEY_COLUMN_USAGE. I feel that the current lack of support for the CATALOG concept justifies omitting these columns from the primary key definitions and relationships in the diagram. In fact, I feel that the omission improves the readability of the diagram.

Why lower-case table names are not possible in Windows for MySQL

May 24, 2009 · Leave a Comment
Filed under: Featured, General, MySql, MySql 5.0, MySQL 5.1 

lower_case_table_names

If set to 1, table names are stored in lowercase on disk and table name comparisons are not case sensitive. If set to 2, table names are stored as given but compared in lowercase. This option also applies to database names and table aliases. See the "Identifier Case Sensitivity" section in the MySQL manual.

If you are using InnoDB tables, you should set this variable to 1 on all platforms to force names to be converted to lowercase.

You should not set this variable to 0 if you are running MySQL on a system that does not have case-sensitive filenames (such as Windows or Mac OS X). If this variable is not set at startup and the filesystem on which the data directory is located does not have case-sensitive filenames, MySQL automatically sets lower_case_table_names to 2.
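To check what a server is actually using, query the variable; to force lowercase storage, set it at startup (it cannot be changed at runtime). A sketch, assuming a Windows my.ini:

mysql> SHOW VARIABLES LIKE 'lower_case_table_names';

# in my.ini under [mysqld], then restart the server:
[mysqld]
lower_case_table_names=1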

get last increment id in MySql

May 1, 2009 · Leave a Comment
Filed under: Featured, MySql, MySql 5.0, MySQL 5.1 

In MySQL we can get the last auto-increment (last inserted) ID by using the following query:


SELECT LAST_INSERT_ID()
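For example, with a throwaway table (note that the value is maintained per connection, so inserts from other clients do not disturb it):

CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(20));
INSERT INTO t (name) VALUES ('alice');
SELECT LAST_INSERT_ID();  -- returns 1, the id generated by this connection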


Oracle-like ROWNUM in MySQL

March 28, 2009 · Leave a Comment
Filed under: Featured, MySql, MySQL 5.1, Oracle 


Sometimes you need to mimic Oracle's ROWNUM exactly, in a situation where there is no opportunity to initialize a counter in a previous statement with SET @rownum:=0;.

It is still possible in a single SQL statement:

SELECT @rownum := @rownum + 1 AS rownum, t.*
FROM (SELECT @rownum := 0) r, mytable t;


Stored Procedures are EVIL

By Tony Marston

3rd September 2006

A lot of developers are taught to use database stored procedures, triggers and database constraints at every possible opportunity, and they cannot understand why an old dinosaur like me should choose to take an opposite view. The reason can be summed up quite simply:

You only know what you have been taught, whereas I know what I have learned.

I was weaned on file systems and databases which did not have any facilities for stored procedures and triggers, so I learned how to build applications without them. When such facilities became available my colleagues and I still never used them for practical reasons:

  • It meant learning a new language, and we didn’t have the time.
  • It meant taking longer to implement and maintain, therefore cost more to develop. This is an important consideration for a software house which can only win business by providing cost-effective solutions.
  • There was no advantage in doing so, so why bother?

Our golden rule was:

Use stored procedures and triggers only when it is an absolutely necessity.

This is in total conflict with the attitude of today’s wet-behind-the-ears tenderfoot greenhorn who seems to think:

Use stored procedures and triggers at every possible opportunity simply because you can.


Amongst the arguments in favour of stored procedures are:

Stored procedures are not as brittle as dynamic SQL

Some people argue that putting ad-hoc SQL in your business layer (BL) code is not that good. Agreed, but who said that the only alternative is stored procedures? Why not have a DAL that generates the SQL query at runtime based on information passed to it by the BL? It is correct to say that small changes to the database can have severe impacts on the application. However, changes to a relational model will always have an impact on the application that targets that model: add a non-nullable column to a table and you will see what I mean. Whether you use stored procedures or ad-hoc queries, you have to change the calling code to make sure that column gets a value when a new row is inserted. For ad-hoc queries, you change the query, and you're set. For stored procedures, you have to change the signature of the stored procedure, since the INSERT/UPDATE procs have to receive a value for the new column. This can break other code targeting the stored procedure as well, which is a severe maintenance issue. A component which generates the SQL on the fly at runtime doesn't suffer from this: it will, for example, receive an entity which has to be saved to the database; that entity contains the new field, the SQL is generated, and the entity is saved. No maintenance problems. With a stored procedure this wouldn't be possible.

Stored procedures are more secure

This is a common argument that many people echo without realising that it became defunct when role-based security was made available. A good DBA defines user-roles in the database, and users are added to those roles and rights are defined per role, not per user. This way, it is easy to control which users can insert / update and which users can for example select or delete or have access to views in an easy way.

With a view it is possible to control which data is accessed on a column basis or row basis. This means that if you want user U to select only 2 or so columns from a table, you can give that user access to a view, not the underlying table. The same goes for rows in one or more tables. Create a view which shows those rows, filtering out others. Give access rights to the view, not the table, obviously using user-roles. This way you can limit access to sensitive data without having to compromise your programming model because you have to move to stored procedures.

It is also said that stored procedures are more secure because they prevent SQL injection attacks. This argument is false for the simple reason that it is possible to have a stored procedure which concatenates strings together and therefore opens itself up to SQL injection attacks (generally seen in systems which use procedures and have to offer some sort of general search routine), while the use of parameterized queries removes this vulnerability, as no value can end up being part of the actual query string.
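For example, a parameterized query in plain JDBC (a sketch; conn is assumed to be an open connection and the users table is hypothetical) never lets the user-supplied value become part of the SQL text:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

static ResultSet findUser(Connection conn, String userSuppliedName)
        throws SQLException {
    PreparedStatement ps = conn.prepareStatement(
        "SELECT id, name FROM users WHERE name = ?");
    ps.setString(1, userSuppliedName); // bound as data, never concatenated
    return ps.executeQuery();
}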

Stored procedures are more efficient

The execution of SQL statements in stored procedures may have been faster than with dynamic SQL in the early days of database systems, but that advantage has all but disappeared in the current versions. In some cases a stored procedure may even be slower than dynamic SQL, so this argument is as dead as a Dodo.

Performance should not be the first question. My belief is that most of the time you should focus on writing maintainable code. Then use a profiler to identify hot spots and then replace only those hot spots with faster but less clear code. The main reason to do this is because in most systems only a very small proportion of the code is actually performance critical, and it’s much easier to improve the performance of well factored maintainable code.

While stored procedures may run faster, they take longer to build, test, debug and maintain, therefore this extra speed comes at a price. If the same function can be performed inside application code at an acceptable speed, what is the advantage of spending more money to make it run faster at a more-than-acceptable speed? It is OK to use stored procedures when you absolutely need a performance gain, but until then they’re nothing but premature optimization.

The company has paid for them, so why not use them?

A similar argument is that by not using what the company has paid for, you are effectively wasting the company’s money. I’m sorry, but using something because it’s there is just not good enough. If I can achieve something inside my application with application code, then I must be given a very good reason to move it out of my application and into the database. Believe it or not there are costs involved in moving logic from one place to another, and those costs must be offset by measurable benefits.

Application code or database code – it’s still code, isn’t it?

No it’s not. Application code is built using a programming language whereas SQL is nothing more than a data manipulation language, and is therefore very limited in its scope. There is absolutely nothing that can be done in a stored procedure that cannot also be done in application code, but the converse is not true.


Amongst the arguments against stored procedures are:

It mangles the 3 Tier structure

Instead of having a structure which separates concerns in a tried and trusted way – GUI, business logic and storage – you now have logic intermingling with storage, and logic on multiple tiers within the architecture. This causes potential headaches down the road if that logic has to change.

Stored procedures are a maintenance problem

The reason for this is that stored procedures form an API by themselves. Changing an API is not that good; it will break a lot of code in some situations. Adding new functionality or new procedures is the "best" way to extend an existing API. A set of stored procedures is no different. This means that when a table changes, or the behaviour of a stored procedure changes and it requires a new parameter, a new stored procedure has to be added. This might sound like a minor problem but it isn't, especially when your system is already large and has run for some time. Every system developed runs the risk of becoming a legacy system that has to be maintained for several years. This takes a lot of time, because the communication between the developer(s) who maintain/write the stored procedures and the developer(s) who write the DAL/BL code has to be intense: a new stored procedure will be saved fine, however it will not be called correctly until the DAL code is altered. When you have dynamic SQL in your BL at your hands, it's not a problem. You change the code there, create a different filter, whatever you like and whatever fits the functionality to implement.

Microsoft also believes stored procedures are over: its next-generation business framework, MBF, is based on ObjectSpaces, which generates SQL on the fly.

Stored procedures take longer to test

Business logic in stored procedures is more work to test than the corresponding logic in the application. Referential integrity will often force you to set up a lot of other data just to be able to insert the data you need for a test (unless you're working in a legacy database without any foreign key constraints). Stored procedures are inherently procedural in nature; this makes it harder to create isolated tests, and they are prone to code duplication. Another consideration, and this matters a great deal in a sizable application, is that any automated test that hits the database is slower than a test that runs inside the application. Slow tests lead to longer feedback cycles.

BL in stored procedures does not scale

If all the business logic is held in the database instead of the application then the database becomes the bottleneck. Once the load starts increasing the performance starts dropping. With business logic in the application it is easy to scale up simply by adding another processor or two, but that option is not readily available if all that logic is held in the database.

If you have a system with hundreds of distributed databases, it is far more difficult to keep all those stored procedures and triggers synchronized than it is to keep the application code synchronized.

Stored procedures are not customisable

This is a big issue if you want an application where the customer can insert their own business logic, or where different logic is required by different customers. Achieving this with application code is a piece of cake, but with database logic it is a can of worms.

Database triggers are hidden from the application

A big problem with database triggers is that the application does not know that they exist, therefore does not know whether they have run or not. This became a serious issue in one application (not written by me) which I was maintaining. A new DBA who was not aware of the existence of all these triggers did something which deactivated every trigger on the main database. The triggers were still there, they had not been deleted, but they had been turned off so did not fire and do what they were supposed to do. This mistake took several hours to spot and several days to fix.

Version Control

It is easy to control all changes to application code by running it through a proper version control system, but those facilities do not exist for stored procedures and triggers. How much damage could be caused if a stored procedure were to get out of sync with the application code? How easy is it to check that the application is running with the correct versions? How much more difficult would it be if the application you were supporting was running on a remote site with nothing more than a dial-up connection?

This is a reason why some teams avoid stored procedures like the plague – it eliminates an area of potentially disastrous screw-ups.

Vendor lock-in

You may think that this is not a problem if you build and maintain the applications for a single company where a change in database vendor is highly unlikely, but what happens should the company decide that their DBMS is no longer flavour of the month and they want to change to a different DBMS? This may be due to several factors, such as spiraling costs or poor performance, but when it happens you will find that a lot of code will have to be rewritten. Porting the data will be one exercise, but porting the stored procedures and triggers will be something else entirely. Now, if all that logic were held inside the application, how much simpler would it be?

Believe it or not there are people out there who write applications which are database-independent for the simple reason that the applications may be used by many different companies, and those many companies may not all use the same DBMS. Those that do use the same DBMS may not be using the same version, and stored procedures written for one version may not be compatible with another.


As far as I am concerned, the use of stored procedures, database triggers and foreign key constraints is OPTIONAL, not MANDATORY, therefore I am free to exercise my option not to use them. That is my choice, and the software that I produce does not suffer in any way, therefore it cannot be defined as the wrong choice.

The web application framework that I have built using PHP does not use stored procedures, database triggers or foreign key constraints, yet it does not suffer from any lack of functionality. This is possible simply because I can do everything I want inside my application where it is instantly accessible and customisable. To those of you who instantly jump to the (wrong) conclusion that this must mean that I have to write a huge amount of duplicated SQL statements my answer is simple – I don’t write any SQL statements at all, they are all generated dynamically at runtime. This is all due to the framework being built using the 3 Tier Architecture which has a clear separation of concerns:

  • There is a separate object in the Business Layer for each database table. This is where all business rules are applied as data passes from the Presentation Layer (UI), through the Business Layer to the Data Access Layer, and back again. The Business Layer does not have any direct communication with the database – this is all handled by the Data Access Layer.
  • There is a single object in the Data Access Layer known as the Data Access Object (DAO). The DAO receives a request from the Business Layer and dynamically constructs and executes the SQL query string to satisfy that request. This implementation means that I can easily switch to another DBMS simply by switching to another DAO, and without having to change a single line of code in any Business Layer object.
  • Referential integrity is also handled by standard code within the framework and requires no additional coding from any developer whatsoever. It uses information which is exported from the Data Dictionary which tells it what to do with every relationship, and the standard code in the framework simply performs the relevant processing. The advantage of this approach is that it is easy to amend or even turn off any of these rules at runtime, which makes the application infinitely more flexible.
  • All changes made to the database can be logged without using a single database trigger. How? By adding extra code into the DAO to write all relevant details out to the AUDIT database. This functionality is totally transparent to all the objects in the Business Layer, and they do not need any extra code to make it work.




© Tony Marston
3rd September 2006

http://www.tonymarston.net
http://www.radicore.org


What’s New in MySQL 5.1

March 3, 2009 · Leave a Comment
Filed under: Featured, MySQL 5.1 


Partitioning and the Event Scheduler are two big features introduced in this version, and a major round of SQL optimization has been done. MySQL 5.1 performs much better in all tests: it can handle more transactions per second, and it does not deadlock with 256 threads, unlike 5.0.

The following features have been added to MySQL 5.1.

* Partitioning. This capability enables distributing portions of individual tables across a file system, according to rules which can be set when the table is created. In effect, different portions of a table are stored as separate tables in different locations, but from the user point of view, the partitioned table is still a single table. Syntactically, this implements a number of new extensions to the CREATE TABLE, ALTER TABLE, and EXPLAIN … SELECT statements. As of MySQL 5.1.6, queries against partitioned tables can take advantage of partition pruning. In some cases, this can result in query execution that is an order of magnitude faster than the same query against a non-partitioned version of the same table. See Chapter 18, Partitioning, for further information on this functionality. (Author: Mikael Ronström)
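A minimal sketch of the new syntax (the table and partition names are made up):

CREATE TABLE sales (
    id        INT NOT NULL,
    sale_date DATE NOT NULL
)
PARTITION BY RANGE (YEAR(sale_date)) (
    PARTITION p2007 VALUES LESS THAN (2008),
    PARTITION p2008 VALUES LESS THAN (2009),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
-- EXPLAIN PARTITIONS SELECT ... shows which partitions the optimizer prunes.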
* Row-based replication. Replication capabilities in MySQL originally were based on propagation of SQL statements from master to slave. This is called statement-based replication. As of MySQL 5.1.5, another basis for replication is available. This is called row-based replication. Instead of sending SQL statements to the slave, the master writes events to its binary log that indicate how individual table rows are affected. As of MySQL 5.1.8, a third option is available: mixed. This will use statement-based replication by default, and only switch to row-based replication in particular cases. See Section 16.1.2, “Replication Formats”. (Authors: Lars Thalmann, Guilhem Bichot, Mats Kindahl)
* Plugin API. MySQL 5.1 adds support for a very flexible plugin API that enables loading and unloading of various components at runtime, without restarting the server. Although the work on this is not finished yet, plugin full-text parsers are a first step in this direction. This allows users to implement their own input filter on the indexed text, enabling full-text search capability on arbitrary data such as PDF files or other document formats. A pre-parser full-text plugin performs the actual parsing and extraction of the text and hands it over to the built-in MySQL full-text search. See Section 22.2, “The MySQL Plugin Interface”. (Author: Sergey Vojtovich)
* Event scheduler. MySQL Events are tasks that run according to a schedule. When you create an event, you are creating a named database object containing one or more SQL statements to be executed at one or more regular intervals, beginning and ending at a specific date and time. Conceptually, this is similar to the idea of the Unix crontab (also known as a “cron job”) or the Windows Task Scheduler. See Section 19.4, “Using the Event Scheduler”. (Author: Andrey Hristov)
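A small sketch (the scheduler must be switched on first; the table and schedule are examples):

SET GLOBAL event_scheduler = ON;

CREATE EVENT purge_old_sessions
    ON SCHEDULE EVERY 1 DAY
    DO
        DELETE FROM sessions WHERE last_seen < NOW() - INTERVAL 30 DAY;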
* Server log tables. Before MySQL 5.1, the server wrote general query log and slow query log entries to log files. As of MySQL 5.1, the server’s logging capabilities for these logs are more flexible. Log entries can be written to log files (as before) or to the general_log and slow_log tables in the mysql database. If logging is enabled, either or both destinations can be selected. The --log-output option controls the destination or destinations of log output. See Section 5.2.1, “Selecting General Query and Slow Query Log Output Destinations”. (Author: Petr Chardin)
* Upgrade program. The mysql_upgrade program (available as of MySQL 5.1.7) checks all existing tables for incompatibilities with the current version of MySQL Server and repairs them if necessary. This program should be run for each MySQL upgrade. See Section 4.4.8, “mysql_upgrade — Check Tables for MySQL Upgrade”. (Authors: Alexey Botchkov, Mikael Widenius)
* MySQL Cluster. MySQL Cluster is now released as a separate product, based on MySQL 5.1 but with the addition of the NDBCLUSTER storage engine. Clustering support is no longer available in mainline MySQL 5.1 releases. MySQL Cluster releases are identified by a 3-part NDB version number; currently, the MySQL Cluster NDB 6.2 and MySQL Cluster NDB 6.3 release series are available for production use.

Some of the changes in MySQL Cluster since MySQL 5.0 are listed here:
o MySQL Cluster replication. Replication between MySQL Clusters is now supported. It is now also possible to replicate between a MySQL Cluster and a non-cluster database. See Section 17.10, “MySQL Cluster Replication”.
o MySQL Cluster disk data storage. Formerly, the NDBCLUSTER storage engine was strictly in-memory; now, it is possible to store Cluster data (but not indexes) on disk. This allows MySQL Cluster to scale upward with lower hardware (RAM) requirements than previously. In addition, the Disk Data implementation includes a new “no-steal” restoration algorithm for fast node restarts when storing very large amounts of data (terabyte range). See Section 17.11, “MySQL Cluster Disk Data Tables”, for more information.
o Improved backups for MySQL Cluster. A fault arising in a single data node during a Cluster backup no longer causes the entire backup to be aborted, as occurred in previous versions of MySQL Cluster.

Many other new features and improvements have been made to the NDBCLUSTER storage engine in MySQL Cluster NDB 6.2 and MySQL Cluster NDB 6.3; for more information about these, see Section 17.14, “MySQL Cluster Development Roadmap”.
* Backup of tablespaces. The mysqldump utility now supports an option for dumping tablespaces. Use -Y or --all-tablespaces to enable this functionality.
* Improvements to INFORMATION_SCHEMA. MySQL 5.1 provides much more information in its metadata database than was available in MySQL 5.0. New tables in the INFORMATION_SCHEMA database include FILES, EVENTS, PARTITIONS, PROCESSLIST, ENGINES, and PLUGINS.
* XML functions with XPath support. ExtractValue() returns the content of a fragment of XML matching a given XPath expression. UpdateXML() replaces the element selected from a fragment of XML by an XPath expression supplied by the user with a second XML fragment (also user-supplied), and returns the modified XML. See Section 11.10, “XML Functions”. (Author: Alexander Barkov)
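For instance:

SELECT ExtractValue('<user><name>Charlie</name></user>', '/user/name');
-- returns: Charlie

SELECT UpdateXML('<user><name>Charlie</name></user>',
                 '/user/name', '<name>Dave</name>');
-- returns: <user><name>Dave</name></user>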
* Load emulator. The mysqlslap program is designed to emulate client load for a MySQL server and report the timing of each stage. It works as if multiple clients were accessing the server. See Section 4.5.7, “mysqlslap — Load Emulation Client”. (Authors: Patrick Galbraith, Brian Aker)

