More ASE quiz questions: December

Read the following, then answer 'true' or 'false' without thinking too long about it: "since truncate table is a minimally logged operation, it should never be used in a production database". The statement above is absolutely false, for reasons I'll explain below. This issue keeps coming back surprisingly often, even with experienced DBAs, so let me try to stamp it out once more.
ASE has a number of so-called 'minimally logged' operations, like select into and fast BCP (there are a few others). These operations use optimizations that cause them not to generate a transaction log record for each affected row, but to log only each allocated or deallocated page.
Due to these optimizations, minimally logged operations tend to be faster than their regular, fully logged counterparts like insert. In order to be able to fully recover a database in case of an unexpected crash or shutdown, ASE relies on the fact that all modifications in a database are always written to the transaction log first.
For minimally logged operations like select into, however, this is not true, since the individual row inserts are not written to the log. Consequently, once a minimally logged operation has been performed, ASE will be unable to recover the database if a calamity comes along.
To stop you from accidentally compromising your database's recoverability, minimally logged operations are disabled by default; they must be enabled explicitly with the 'select into/bulkcopy/pllsort' database option. Without this option being set, an attempt to run a minimally logged operation will typically result in an error.
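As a minimal sketch (the database name 'mydb' is just an example), enabling this option looks like this:

```sql
-- enable minimally logged operations in database 'mydb'
-- (run as sa/dbo; 'mydb' is an example name)
use master
go
exec sp_dboption mydb, 'select into/bulkcopy/pllsort', true
go
-- activate the option change with a checkpoint in the database itself
use mydb
go
checkpoint
go
```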
Once a minimally logged operation has taken place, ASE no longer allows you to dump the transaction log. This is done to avoid creating a false sense of security that the dumped log could be used to recover the database later -- after all, that's what we dump transaction logs for. In order to guarantee recoverability, a full database dump should be made first. For this reason, minimally logged operations should typically not be used in production databases (excluding, of course, tempdb or other databases whose recoverability may not matter).
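To illustrate (database name and dump file paths are made-up examples), re-establishing the log dump sequence after a minimally logged operation requires a full database dump first:

```sql
-- after e.g. a 'select into', 'dump transaction' to a dump file is refused;
-- a full database dump re-enables subsequent log dumps
dump database mydb to '/dumps/mydb_full.dmp'
go
dump transaction mydb to '/dumps/mydb_log1.dmp'
go
```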
So far, nothing new, right? Now then, truncate table is also a minimally logged operation, since it does not log every deleted row. Instead, it just deallocates all data pages and index pages, and logs those deallocations, which is why it's a much faster way to clean out a table than running delete, which is fully logged.
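The contrast can be shown side by side (the table name is an example):

```sql
-- fully logged: writes a log record for every deleted row
delete big_table

-- minimally logged: deallocates the table's pages and logs only
-- those deallocations -- typically much faster on large tables
truncate table big_table
```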
It is essential to understand that simply deallocating these pages does not impact recoverability of the database, since all required information -- the page deallocations themselves -- is included in the log. Likewise, if a truncate table operation is aborted, the deallocations are simply rolled back. In both cases, database integrity is still guaranteed. The fundamental difference between select into, fast BCP etc. on the one hand and truncate table on the other is that for the former, the log does not contain enough information to recover the database, whereas for truncate table it does. Therefore, there is no reason to say that truncate table should never be used in a production database (obviously, you gotta be careful which table you truncate, but that's a different matter).
The reason why this misunderstanding keeps popping up is that we've all learned about the advantages and dangers of minimally logged operations without realising, or having been told, that truncate table is an exception when it comes to the disadvantages.
I hope this clarifies things once and for all. How would you rate this ASE quiz question?

Suppose a client application is BCP-ing a large number of rows -- say, 10 million -- into a table. As a DBA, you want to monitor the progress of this BCP job, but without logging into the client system and looking at client app log files (or, for the sake of this question, let's assume that such client-side BCP log files are not accessible to you).
Assuming you know roughly how many rows it'll be doing (remember, we said 10 million), this will tell you how far the job has progressed. There are actually two ways of doing this, and both involve the MDA tables. The first one is to monitor the value of monProcessStatement.RowsAffected for the session doing the BCP-in.
When queried, the value of monProcessStatement.RowsAffected shows the number of rows affected by any statement executing at that moment -- including BCP-in and BCP-out, inserts, deletes and updates, as well as select statements. When the number of rows the statement will affect is roughly known, this can be used to track the statement's progress.
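A minimal sketch of this first method (the spid value 123 is an example; substitute the BCP session's spid):

```sql
-- rows affected so far by the statement currently executing in session 123
select SPID, RowsAffected
from master..monProcessStatement
where SPID = 123
```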
The second way is to monitor monOpenObjectActivity.RowsInserted. This value also shows the progress of a BCP-in operation, but in a different way than monProcessStatement.RowsAffected: it shows the total number of rows inserted into a particular table since the server was started (assuming the config parameter 'number of open objects' is set high enough). This is a cumulative count for all sessions, so to track the progress of your BCP-in job, you'd need to know roughly how many rows the table contained before the BCP-in started.
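A sketch of this second method (database and table names are made-up examples):

```sql
-- cumulative insert count for the target table since server start
select RowsInserted
from master..monOpenObjectActivity
where DBID = db_id('mydb')
  and ObjectID = object_id('mydb..mytable')
```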
Note that as long as a BCP batch has not been committed, monOpenObjectActivity.RowsInserted does not count any of the inserted rows -- in contrast with monProcessStatement.RowsAffected, which does. When BCP-ing NNN rows, monProcessStatement.RowsAffected will show a number between 0 and NNN, while monOpenObjectActivity.RowsInserted simply keeps counting all inserted rows. To identify sessions doing a BCP-in, look for those sessions where the command column in master..sysprocesses indicates a bulk insert. You might wonder about alternative approaches, such as simply running select count(*) on the target table; the problem is that this may take a long time to run for large tables.
Using the MDA table approach as described above is far superior, since it provides immediate answers. Thanks to Jeff Tallman for the inspiration for this question.

A customer recently ran into the restriction that a view definition cannot reference variables. I'll spare you the details except one: the obvious (but disallowed) solution was to include the session-specific global variable @@spid in the view definition. As you guessed, there is indeed a trick to work around the limitation of not allowing variables in a view definition: the application context functions, such as set_appcontext() and get_appcontext(). These little-known built-in functions provide access to the so-called 'application context', which lets you create and retrieve attributes in a session; these attributes are accessible for that session only.
If that sounds exotic: what matters here is that you can use these functions to retrieve information about some well-known aspects of a session, such as the current database, the current login ID and DB user ID, etc.
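A minimal sketch of the trick (the namespace 'myapp', attribute 'filter', and all table/view/column names are made-up examples):

```sql
-- store a session-private attribute...
select set_appcontext('myapp', 'filter', '42')
go
-- ...and read it back inside a view definition, where a variable
-- would not be allowed; each session sees only its own value
create view v_my_rows as
select * from my_table
where some_col = convert(int, get_appcontext('myapp', 'filter'))
go
```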
For more information about these functions, see the ASE documentation. Note that you need permission to execute these built-ins; the remedy for a permission error is to grant select permission on them to your user, login or group.

Every now and then, a customer contacts me after having manually messed up their system tables, asking if I would perhaps have a suggestion to fix things. Now, I don't blame anyone for playing with system tables and getting it wrong -- I've done it myself, and it can be most instructive. However, I'm glad I've always done such things on my own test server only -- unlike some of those customers. For example, one customer requested me to ask Sybase to implement a feature that makes it impossible to manually delete rows from system tables.
Obviously though, this customer's problem had to be solved on a different level than by adding some ASE security mechanism. Anyway, this month's quiz question looks at two things customers have asked me about: deleting the 'guest' user from the 'master' database, and deleting the 'dbo' user there as well. To start with the first one: sp_dropuser will not drop the 'guest' user from 'master', so getting rid of the 'guest' user in 'master' requires a manual delete against master..sysusers. When also deleting the 'dbo' user from the 'master' database, there's surprisingly little immediate impact.
Fixing the situation where the 'guest' user has disappeared is easy: just put the corresponding row back into master..sysusers. Should the 'dbo' user have gone as well, just insert that row, too. But first, how would you get into this situation?
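A hedged sketch of what those repair inserts could look like -- the suid/uid/gid values shown are the defaults on typical ASE installs, so verify them against an intact database before relying on them:

```sql
-- system table updates must be enabled first
exec sp_configure 'allow updates to system tables', 1
go
-- restore the 'guest' user (typical defaults: suid=-1, uid=2, gid=0)
insert master..sysusers (suid, uid, gid, name)
values (-1, 2, 0, 'guest')
-- restore the 'dbo' user (typical defaults: suid=1, uid=1, gid=0)
insert master..sysusers (suid, uid, gid, name)
values (1, 1, 0, 'dbo')
go
exec sp_configure 'allow updates to system tables', 0
go
```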
I think the most likely scenario would be something like this: someone runs a careless delete against master..syslogins, say with a missing or wrong where-clause. When a login has no corresponding row in master..syslogins, that login can no longer connect to ASE. This includes the 'sa' login, so the DBA cannot go in and repair the damage. Should you not have an already-connected 'sa' session, there's simply no way to get into ASE anymore, and you basically have to go through the restore procedure for a lost 'master' device: shut down the server process (avoid 'kill -9' when possible), then run 'dataserver' with the -z and -b options to recreate the 'master' device (rename or copy the existing 'master' device first).
If you have a recent master DB dump (you do, don't you?), load it and restart ASE; you should be OK again now. If you do not have a recent master DB dump, you have two reasons for banging your head against the wall, so maybe do that first -- the first reason being that syslogins delete, BTW.
You should then run disk reinit commands to recreate your sysdevices contents, followed by disk refit to reconstruct sysusages and sysdatabases. However, to run those disk reinit commands, you need to know the physical names, vdevno's and sizes of your database devices, and if there's no master DB dump, I'd guess this information is not available either.
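For illustration (all device names, paths, vdevno's and sizes below are made-up examples; check the disk reinit syntax against your ASE version's documentation):

```sql
-- re-register one database device in sysdevices
disk reinit
    name = 'data_dev1',
    physname = '/sybase/data/data_dev1.dat',
    vdevno = 2,
    size = '2G'
go
-- once all devices are back, rebuild sysusages and sysdatabases
disk refit
go
```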
There may not be an immediate pressing problem for everyone to solve with this, but it's worth having seen the trick behind the solution. When you need to determine whether a string contains a particular substring, you can use the charindex built-in function. However, let's say you need to determine whether a string contains that particular substring a specific number of times -- for example, twice or thrice (I've been waiting for an opportunity to use that word). Doing this with charindex is possible, but gets messy very quickly.
Can this be done in a better way? Yes, with the str_replace() built-in function. The trick is to replace every occurrence of the substring -- say, 'ABC' -- with NULL, i.e. remove it: the original string then gets shorter by an exact multiple of the length of 'ABC', and dividing the difference in length by the length of 'ABC' gives the number of occurrences. To take this one step further, you can use the same mechanism to figure out how many times each of a set of strings occurs: in the following example, table 't2' contains a list of strings, for each of which we want to determine how often it occurs in the strings in table 't1'. Nevertheless, I think it's one of those tricks that may come in handy one day.
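A sketch of both variants (table and column names are made-up examples; str_replace(s, 'ABC', NULL) removes every occurrence of 'ABC' from s):

```sql
-- how often does 'ABC' occur in t1.s?
select s,
       (char_length(s) - char_length(str_replace(s, 'ABC', NULL)))
           / char_length('ABC') as occurrences
from t1

-- the same for every search string in t2.pattern
select t1.s, t2.pattern,
       (char_length(t1.s) - char_length(str_replace(t1.s, t2.pattern, NULL)))
           / char_length(t2.pattern) as occurrences
from t1, t2
```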
I've actually used this once -- the problem being to find which customers had ordered a particular product more than once, with the product codes concatenated in a single varchar column for each customer.
I'm still waiting for another opportunity to apply this trick.

First, the BCP error message points towards a varchar input value being too long for the column it is copied into; the row is still inserted, but the input value is truncated to the length of the column.