Exhibit 4 To The Comments Of Relpromax Antitrust Inc.

This just in from fellow Oak Table member Stefan Koehler: fresh news from OpenWorld on this bug (see update 1 above) is that the unfinished bits of the fix are now known as a separate bug. And now a potential threat about column groups in general, described in a series of three blog posts from Magnus Johansson.

With reference to a relevant MoS Doc Id and patch. Adding insult to injury: what if Oracle has silently created some column groups or other extended stats on an unpatched version? Comment by piontekdd — August 2, 4: Comment by Amir Hameed — August 2, 6: Comment by Fairlie Rego — August 3, 6: I must admit my first experience with adaptive optimization was actually a happy one.

A heavy-on-CPU query that is impossible to tune (3rd party, no source, dynamic, yadda-yadda) actually improved markedly once I got the db onto 12c and left the adaptive optimization on. Connections via sqlnet to PDBs are problematic, to say the least. Comment by Noons — August 3: The really annoying thing about adaptive optimization is that sometimes it works really well, adaptive execution plans in particular, which, I guess, is what appeared in your case — the optimizer predicts where a problem might appear and decides on the necessary evasive action in advance.

Even then, the amount of work the optimizer does to derive inflexion points can be far from cost-effective. Comment by Jonathan Lewis — August 3: Our experience is that, in a very busy RAC system, adaptive optimization can run into many issues. We actually had to disable it for one such database. For databases with moderate load profiles, we are able to live with it. Comment by VK — August 3, 5: Comment by Geert — August 3, 6: Of course, you can just turn off all the adaptive features…

Comment by Dom Brooks — August 5, 9: Comment by Jonathan Lewis — August 15, 4: SQL Plan Baselines one creates in the hope of freezing the plan for good. Comment by laimisndLaimis — August 9, 7: Do you have a link to a demonstration of directives over-riding baselines?

I would certainly expect SQL Directives to have the ability to override incomplete hints, abused SQL patches and manually set statistics. Anything short of a completely specified plan could be modified by dynamic sampling, so I think an outline would be much more likely to be affected by an SQL directive than an SQL Plan Baseline would, since outlines tend to be less well specified.

Will see if I can dig it out. That happened on 12c. It could have been an isolated case of an incomplete baseline like you described. For sure the baseline was created from the cursor cache — this is the technique I prefer for quick tuning. Comment by laimisnd — August 17, 6: What was done however is this: However, the plan steps were still wrong. Comment by laimisnd — August 17, 7: Seems low probability, though. Comment by Jonathan Lewis — August 17: The scenario of there being multiple accepted plans is the only one which seems to make sense, on the face of it.

Comment by Dom Brooks — August 17: Following this logic, the same should be valid for SQL directives too. Conceptually, the hints which constitute profiles and SQL patches can be viewed as extra statistics.

As to the question why SQL baseline failed to enforce a plan in my particular case — it could have been a simple glitch. It was not a big deal to fix this one case. Comment by laimisnd — August 17, 2: When the statement is parsed, the optimizer will only select the best plan from among this set.

It can also happen (unlikely, but it can) if, for example, the CBO has reached its max permutations limit. Any found plan would do, obviously.

I tried to see what happens inside the CBO. The following trace SPD sections are at the end of the trace file: Why that is done as the initial activity is not exactly clear to me. Comment by laimisnd — August 29: Laimis Oracle blog — September 1, 6: I have done a quick blog-post about them here: With default settings, SQL Plan Directives will still be created, but will not automatically cause dynamic sampling or creation of column groups (is this how automatic creation of column groups currently works?).

New mechanism for persistence of dynamic sampling query results. No longer uses the result cache. Information is available to all RAC nodes and persists across instance restarts. Comment by hkpatora — September 25, 4: Comment by hkpatora — October 1, 4: Comment by Jonathan Lewis — October 1, 9: Comment by Dom Brooks — October 3.

Oracle Scratchpad, August 2: Adaptive mayhem. Update 2 Sep; Update 3 Nov; Update 6 Jan. Turn off adaptive optimization.

I second that, Amir.

After deleting those directives, the good plan was finally accepted. There is at least one corner case here: The following trace SPD sections are at the end of the trace file: Regards, Patrick. Comment by hkpatora — September 25, 4: Patrick, thanks for that.

I think the most significant part of that note is this: We recommend that upgrades to ... This may be done by applying the following patches:

More ASE quiz questions: December. Read the following, then answer 'true' or 'false' without thinking too long about it:

The statement above is absolutely false, for reasons I'll explain below. This issue keeps coming back surprisingly often, even with experienced DBAs, so let me try to stamp it out once more.

ASE has a number of so-called 'minimally logged' operations, like select into and fast BCP, among others. These operations use optimizations that cause them not to generate a transaction log record for each affected row, but only to log each (de)allocated page.

Due to these optimizations, minimally logged operations tend to be faster than their regular, fully logged counterparts (like insert). In order to be able to fully recover a database in case of an unexpected crash or shutdown, ASE relies on the fact that all modifications in a database are always written to the transaction log first.

For minimally logged operations like select into, this is however not true, since the individual row inserts are not written to the log. Consequently, when a minimally logged operation has been performed, ASE will be unable to recover the database if a calamity comes along.

To stop you from accidentally compromising your database integrity, minimally logged operations are disabled by default. Without this option being set, an attempt to run a minimally logged operation will typically result in an error.

Once a minimally logged operation has taken place, ASE no longer allows you to dump the transaction log. This is done to avoid creating a false sense of security that the dumped log can be used to recover the database later (after all, that's what we dump transaction logs for). In order to guarantee recoverability, a full database dump should be made first. For this reason, minimally logged operations should typically not be used in production databases (excluding, of course, tempdb or other databases whose recoverability may not matter).

So far, nothing new, right? Now then, truncate table is also a minimally logged operation, since it does not log every deleted row. Instead, it just deallocates all data pages and index pages, and logs those deallocations, which is why it's a much faster way to clean out a table than running delete (which is fully logged).

It is essential to understand that simply deallocating these pages does not impact recoverability of the database, since all required information is included in the log. Likewise, if a truncate table operation is aborted, the deallocations are simply rolled back. In both cases, database integrity is still guaranteed. The fundamental difference is that select into, fast BCP etc. do not write the modified rows to the log at all, whereas truncate table logs everything needed for recovery. Therefore, there is no reason to say that truncate table should never be used in a production database (obviously, you gotta be careful which table you truncate, but that's a different matter).
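The point above can be illustrated with a toy model (this is not ASE internals, and the rows-per-page figure is purely an illustrative assumption): a fully logged delete writes one log record per row, while truncate table logs only one deallocation per page, yet in both cases the log fully describes the change.

```python
# Toy model: log volume of a fully logged delete vs. a minimally
# logged truncate. Rows-per-page is an illustrative assumption.
ROWS_PER_PAGE = 200

def log_records_for_delete(row_count: int) -> int:
    """Fully logged: one log record per deleted row."""
    return row_count

def log_records_for_truncate(row_count: int) -> int:
    """Minimally logged: one record per deallocated page (ceiling)."""
    return -(-row_count // ROWS_PER_PAGE)

# Either way, the log records are sufficient to redo or undo the
# operation, which is why truncate table remains recoverable.
print(log_records_for_delete(1_000_000))    # 1000000
print(log_records_for_truncate(1_000_000))  # 5000
```

The orders-of-magnitude difference in log records is where the speed advantage comes from; recoverability is unaffected because page deallocations are themselves logged.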

The reason why this misunderstanding keeps popping up is that we've all learned about the advantages and dangers of minimally logged operations, without realising (or having been told) that truncate table is an exception when it comes to the disadvantages.

I hope this clarifies things once and for all. How would you rate this ASE quiz question?

As a DBA, you want to monitor the progress of this BCP job, but without logging into the client system and looking at client app log files (for the sake of this question, let's assume that no such client-side BCP log files are accessible to you).

Assuming you know roughly how many rows it'll be doing (remember, we said 10 million), this will tell you how far the job has progressed. There are actually two ways of doing this, and both involve the MDA tables. The first one is to monitor the value of monProcessStatement.RowsAffected for the session doing the BCP-in.

When queried, the value of monProcessStatement.RowsAffected shows the number of rows affected by any statement executing at that moment, including BCP-in and BCP-out, multi-statement inserts, deletes and updates, as well as select statements. When the number of rows the statement will affect is roughly known, this can be used to track the statement's progress.

The second one is to monitor monOpenObjectActivity.RowsInserted. This value also shows the progress of a BCP-in operation, but in a different way than monProcessStatement.RowsAffected: RowsInserted shows the total number of rows inserted into a particular table since the server was started (assuming the config parameter 'number of open objects' is set high enough). This is a cumulative count for all sessions, so to track the progress of your BCP-in job, you'd need to know roughly how many rows the table contained before the BCP-in started.

RowsInserted does not count any of the inserted rows -- in contrast with monProcessStatement.RowsAffected, which does. RowsAffected will show a number between 0 and NNN, while RowsInserted simply keeps counting all inserted rows. To identify sessions doing a BCP-in, look for those sessions where sysprocesses. ... In ASE 15, this column was added in ... You might wonder about alternative approaches; the problem is that these may take a long time to run for large tables.
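The arithmetic behind the two approaches can be sketched as follows (a sketch only: the function names mirror the MDA columns discussed above, and the expected row count is something you supply yourself):

```python
def progress_from_rows_affected(rows_affected: int, expected_rows: int) -> float:
    """monProcessStatement.RowsAffected style: a per-statement counter
    that starts at 0 for this BCP-in, so it maps directly to progress."""
    return rows_affected / expected_rows

def progress_from_rows_inserted(rows_now: int, rows_at_start: int,
                                expected_rows: int) -> float:
    """monOpenObjectActivity.RowsInserted style: a cumulative per-table
    counter, so the table's pre-job count must be subtracted first."""
    return (rows_now - rows_at_start) / expected_rows

# Example: a 10-million-row BCP-in, sampled mid-flight.
print(f"{progress_from_rows_affected(2_500_000, 10_000_000):.0%}")               # 25%
print(f"{progress_from_rows_inserted(52_500_000, 50_000_000, 10_000_000):.0%}")  # 25%
```

The design difference is the one the text describes: the per-statement counter needs no baseline, while the cumulative counter needs a "before" snapshot of the table's row count.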

Using the MDA table approach as described above is far superior, since it provides immediate answers. Thanks to Jeff Tallman for the inspiration for this question.

I'll spare you the details except one: the obvious solution was to include the session-specific global variable @@spid in the view definition, as follows: As you guessed, indeed there is a trick to work around the limitation of not allowing variables in a view definition. These little-known built-in functions provide access to the so-called 'application context', which lets you create and retrieve attributes in a session; these attributes are accessible to that session only.

If that sounds exotic, what matters here is that you can use these functions to retrieve information about some well-known aspects of a session, such as the current database, the current login ID and DB user ID etc.
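The text does not name the built-ins (in the ASE documentation they appear as set_appcontext() and get_appcontext(), among others; check the docs for the exact signatures). Here is a toy Python model of the idea, purely illustrative, not ASE code:

```python
# Toy model of a per-session 'application context': each session (spid)
# can stash named attributes that only it can read back, which is how a
# single view definition can behave differently per session.
_contexts = {}  # maps spid -> {(namespace, attribute): value}

def set_appcontext(spid, namespace, attribute, value):
    """Store an attribute visible only to the given session."""
    _contexts.setdefault(spid, {})[(namespace, attribute)] = value

def get_appcontext(spid, namespace, attribute):
    """Retrieve a session-private attribute, or None if unset."""
    return _contexts.get(spid, {}).get((namespace, attribute))

# Session 42 sets an attribute; session 43 cannot see it.
set_appcontext(42, "my_app", "region", "EMEA")
print(get_appcontext(42, "my_app", "region"))  # EMEA
print(get_appcontext(43, "my_app", "region"))  # None
```

In the real server the session ID is implicit (the functions operate on the caller's own context), so a view can call the getter and effectively filter rows per session without any variables appearing in its definition.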

For more information about these functions, see the ASE documentation. The remedy is to grant select permission to your user, login or group: If I would perhaps have a suggestion to fix things? Now, I don't blame anyone for playing with system tables and getting it wrong -- I've done it myself, and it can be most instructive. However, I'm glad I've always done such things on my own test server only -- unlike some of those customers. For example, one customer requested me to ask Sybase to implement a feature that makes it impossible to manually delete rows from system tables.

Obviously though, this customer's problem had to be solved on a different level than by adding some ASE security mechanism. Anyway, this month's quiz question looks at two things customers have asked me about. To start with the first one: when also deleting the 'dbo' user from the 'master' database, there's surprisingly little impact. Therefore, getting rid of the 'guest' user in 'master' requires a manual delete against master..

Fixing the situation where the 'guest' user has disappeared is easy: Should the 'dbo' user have gone as well, just insert this row: But first, how would you get here?

I think the most likely scenario would be something like this: When a login has no corresponding row in master.. This includes the 'sa' login, so the DBA cannot go in and repair the damage. Should you not have an already-connected 'sa' session, there's simply no way to get into ASE anymore, and you basically have to go through the restore procedure for a lost 'master' device: avoid 'kill -9' when possible, then run 'dataserver' with the -z and -b options to recreate the 'master' device (rename or copy the existing 'master' device first).

If you have a recent master DB dump (you do, don't you?), load it; you should be OK again now. If you do not have a recent master DB dump, you have two reasons for banging your head against the wall, so maybe do that first -- the first reason being that syslogins delete, BTW.

You should then run disk reinit commands to recreate your sysdevices contents, followed by disk refit to reconstruct sysusages and sysdatabases. However, to run those disk reinit commands, you need to know the physical names, vdevno's and sizes of your database devices, and if there's no master DB dump, I'd guess this information is not available either.

There may not be an immediate pressing problem for everyone to solve with this, but it's worth having seen the trick behind the solution. When you need to determine whether a string contains a particular substring, you can use the charindex built-in function. However, let's say you need to determine whether a string contains that particular substring a specific number of times, for example, twice or thrice (I've been waiting for an opportunity to use that word). Doing this with charindex is possible but gets messy very quickly.

Can this be done in a better way? This function was introduced in ASE ... Replacing every occurrence of 'ABC' with the empty string means that the original string gets shorter by an exact multiple of the length of 'ABC'. To take this one step further, you can use the same mechanism to figure out how many times each of a set of strings occurs. In the following example, table 't2' contains a list of strings, for each of which we want to determine how often it occurs in the strings in table 't1': Nevertheless, I think it's one of those tricks that may come in handy one day.
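The length-difference trick is easy to see in a few lines of Python (the document's function is an ASE string built-in; this sketch shows only the arithmetic, and counts non-overlapping matches, just as a string replace would):

```python
def count_substring(s: str, sub: str) -> int:
    """Count non-overlapping occurrences of sub in s by replacing them
    with the empty string and measuring how much shorter s becomes."""
    if not sub:
        raise ValueError("substring must be non-empty")
    return (len(s) - len(s.replace(sub, ""))) // len(sub)

print(count_substring("xxABCyyABCzz", "ABC"))   # 2
print(count_substring("no match here", "ABC"))  # 0
```

Expressed in SQL terms, this is the same computation the text describes: the difference between the original string length and the length after replacing the substring, divided by the substring's length.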

I've actually used this once -- the problem being the question of which customers had ordered a particular product more than once, with the product codes concatenated in a single varchar column for each customer.

I'm still waiting for another opportunity to apply this trick.

First, the BCP error message points towards a (var)char input value being too long for the column it is copied into; the row is still inserted, but the input value is truncated to the length of the column.