
Controlling network traffic on a special-purpose machine using the SEP firewall


From time to time, a requirement comes down the pipeline in which a machine with a "special" purpose needs to be connected to the internal network. The requirements are as follows:

  • No ability to "ping" the machine
  • No inbound traffic allowed
  • Only one IP address is allowed to access this machine via port 3389 for remote administration

Meeting the above requirements can be accomplished using the SEP firewall. For the purpose of this article, I'm using SEP 12.1 RU3.

Here's a screen shot of the three firewall rules created to accomplish our goal:

untitled_39.JPG

 

To test the first rule, Block Ping, we can verify the block with a simple Nmap scan:

1_3.JPG

 

The Traffic log from the SEP firewall also verifies the ping attempt is blocked:

2_3.JPG

 

Next, we can test the second rule, Allow Remote Administration, by doing a simple RDP to the machine from the allowed IP address. The Traffic log from the SEP firewall also confirms this is working:

3_3.JPG

 

Now, I did an Nmap scan from the allowed IP address to confirm port 3389 is open, which it is:

4_3.JPG

 

I also did an Nmap scan from a disallowed IP address to confirm port 3389 is closed, which it is:

5_3.JPG

 

Lastly, we can test the third rule, Block Incoming Traffic, by attempting to connect to a share on the machine. Access is denied:

6_3.JPG

 

The Traffic log from the SEP firewall also confirms the block was successful:

7_1.JPG

 

The SEP firewall is a great tool and has endless possibilities for controlling traffic on your network. The aim of this article was to give you a small snapshot into what is possible using the firewall. I hope this is helpful to you. Please feel free to leave feedback, whether positive or negative.


Memory Optimization for SWV and SWV Streaming


Many users of SWV may experience performance problems. Applications can feel slower, and overall performance can degrade when activated layers are present. Much of this is down to the Windows memory manager not being used optimally when SWV is in place.

Now administrators can boost the overall performance of computers, laptops and VDI desktops with a few settings that enable the Windows memory manager to perform better.

Below are a few registry values that can help you boost performance. As always, editing the registry can result in a non-functioning computer or even crashes.

All settings below are tested and optimised for Windows 7 and Windows Server 2008 R2, 32- and 64-bit. Always remember to test thoroughly in a test environment prior to putting anything into production.

Your Windows Registry contains settings related to the management of memory. Certain values under one key can be modified to change how the system manages memory. Described below is the procedure to reach this key.

  • Type ‘regedit’ in the search box, and hit Enter to open Registry Editor
  • Reply Yes if prompted
  • In the left pane of Registry Editor, expand the Computer node to view the registry keys
  • Click to expand HKEY_LOCAL_MACHINE
  • In the expanded list, find and expand the SYSTEM key
  • Next, locate and expand CurrentControlSet
  • Then locate and expand the Control key
  • Find and expand the key named Session Manager
  • Finally, select the Memory Management key

 

 

Value 1 ClearPageFileAtShutdown

The page file is reserved space on the hard drive used as an extension of RAM. It holds data that has not been used recently and has been moved out of RAM to disk. It may contain information stored by third-party applications, including personal data such as usernames, passwords, credit card numbers, and other security PINs. The page file can be cleared at shutdown, but Windows does not do so by default. To clear the page file at shutdown:

  • Double click this value to Modify
  • In the Value data field, change the value from 0 to 1
  • Click OK to save your changes

Value 2 DisablePagingExecutive

When enabled, DisablePagingExecutive keeps kernel code in RAM rather than paging it out to the comparatively slower virtual memory. It is also helpful when debugging drivers. 64-bit Windows may have this feature enabled by default; 32-bit users can enable it manually:

  • Double click this value to Modify
  • In the Value data field, change the value from 0 to 1
  • Click OK to save your changes

Value 3 LargeSystemCache

Enabling LargeSystemCache increases the size of the system cache. It usually improves system performance, but reduces the physical memory available to other applications and services. This value generally benefits servers, whereas on workstations it is recommended to turn it off using the following steps:

  • Double click this value to Modify
  • In the Value data field, change the value from 1 to 0
  • Click OK to save your changes

Value 4 NonPagedPoolSize / PagedPoolSize

The paged pool is a portion of memory whose pages can be moved to the page file, while the non-paged pool is the opposite: its pages always remain in physical memory and are never moved to the page file.

Enabling either value requires specifying the exact size in bytes. It is also fine to leave these values disabled, which puts the system in charge of calculating an optimal value that adjusts dynamically. The value for PagedPoolSize may range between 1MB and 512MB; however, 192MB is the recommended setting here:

  • Double click this value to Modify
  • Change the Base from Hexadecimal to Decimal
  • In the field next to Value data, replace 0 with 192 (making it 192MB)
  • Click OK to save your changes

Value 5 NonPagedPoolQuota / PagedPoolQuota

Enabling these values limits the memory resources available to each individual process. If a process tries to exceed its allocated quota, it fails. It is therefore recommended to leave them disabled, or to disable them if they are enabled. Both values are enabled by specifying the amount of memory allocated to the non-paged pool and paged pool respectively; this size ranges between 1MB and 128MB and is assigned through Value data. Disabling these values instead lets the system calculate an optimal value for both entries based on the current physical memory, and auto-adjust if the memory size changes. To disable these values:

  • Double click this value to Modify
  • Make sure the Value data is 0, to make it auto-managed by system
  • Click OK to save your changes

Value 6 PhysicalAddressExtension

Physical Address Extension, generally known as PAE, is the technology that enables a 32-bit operating system to access more than 4GB of memory, up to 64GB or 128GB depending on the physical address size of the processor. A 64-bit system can access more than 4GB of RAM natively and does not need PAE. If you have a 32-bit Windows with more than 4GB of RAM, you can enable this value in the Registry Editor:

  • Double click this value to Modify
  • In the Value data field, change the value from 0 to 1
  • Click OK to save your changes

Value 7 SessionPoolSize

This registry entry deals with the memory used for allocations by video drivers. If the size of the session pool is pre-defined, it limits how much memory the active session can use; if the session exceeds it, the session crashes with a stop message. To avoid this inconvenience it is suggested to raise the value of SessionPoolSize:

  • Double click this value to Modify
  • Change the Base, from Hexadecimal to Decimal
  • In the Value data field, change the value to 48 (making it 48MB)
  • Click OK to save your changes

Value 8 SessionViewSize

SessionViewSize relates to the desktop heaps within the active session on a server or workstation. It allocates memory to the interactive window station, which contains a group of desktop objects such as windows and menus. It behaves much like SessionPoolSize, in that a process is frozen when it tries to exceed the allocated memory:

  • Double click this value to Modify
  • Change the Base, from Hexadecimal to Decimal
  • In the Value data field, change the value to 96 (making it 96MB)
  • Click OK to save your changes

Value 9 SystemPages

SystemPages refers to the number of page table entries (PTEs) reserved to store the mapping between virtual addresses and physical addresses. This mapping is performed by dividing RAM into fixed-size page frames in which information is stored and mapped. If the value of SystemPages has to be something other than 0, it must be set to the maximum value, 0xFFFFFFFF. However, it is recommended to leave it system-managed: if the Value data is left at 0, the system calculates and adjusts the optimum value for this entry.

  • Double click this value to Modify
  • In the Value data, make sure the text field indicates 0.
  • Click OK to save your changes

Value 10 PoolUsageMaximum

This value sets the maximum allowed usage of the paged pool, expressed as a percentage. The value may not exist in the registry by default; in that case, create a new DWORD value and name it exactly PoolUsageMaximum. The value data determines the point at which the trimming process starts.

  • Double click this newly created value to Modify
  • In the Value data field, put ‘50’ to allow 50% usage of total paged pool before trimming starts
  • Click OK to save your changes

Symantec Clearwell eDiscovery Platform 7.1.4 Feature Briefing - Audio Search


I’m really pleased to announce the availability of a new Symantec Clearwell eDiscovery Platform Feature Briefing; this one is on the subject of the new Audio Search feature that will be introduced in Clearwell 7.1.4.

This FB has been put together by the SES Technical Education team.

The Symantec Clearwell eDiscovery Platform 7.1.4 introduces the ability to run phonetic-based searches against a large range of files containing audio. This not only makes the audio content searchable, but also allows a reviewer to choose a search term and start playing the file from a point just before the keyword or phrase appears.

Enjoy

How PST Migration can drive your BYOD policy


Bring Your Own Device is a phenomenon which is attracting a lot of attention in the IT-worker space. Bringing 'any' device to work to do your day-to-day job seems like a gift to many employees, especially in environments with office workers who may be relatively cash-rich. It does leave some headaches for IT administrators though; for example, what can be done with the PST files that end-users have accumulated over the years? In this article I'll explain how you can remove PST files from your environment (once and for all!) and how that can assist with your BYOD policies.

The problem

 ballot.jpg

There are many, many problems with PST files, no matter which way you look at them:
- Multiple copies are likely to exist on both local machine locations, and network drives - backing these up is costly for users and administrators
- End-users are likely to have many, many PST files - finding useful, relevant information is costly and time-consuming
- Files can be password protected, and users forget the password or are unwilling to disclose it
- Files are relatively easy to corrupt
- Information is not available when using non-PC (or Mac) based devices; for example, all those users who want to bring in their iPad to do their day-to-day work will struggle to access legacy data stored in their PST files

 I've written about BYOD before, and how Enterprise Vault can help with the overall strategy of developing BYOD for your organisation.  Take a look at that article, via the link below:

 http://www.symantec.com/connect/articles/how-does-byod-affect-your-enterprise-vault-strategy

 So, how can these two seemingly different goals of PST migration and BYOD coincide with each other and become reality?  

 The cure?

  pills.jpg

One way is to bring the PST and BYOD goals together. Each powers and fuels the other.

If only you could get rid of all of the old PST files that users have and provide them with a slick interface: it would mean that when they're using their own device, PC, tablet, Mac or whatever, they'd have the benefit of a single location to search and be able to quickly get to the information that they need to perform their tasks.

If only an IT administrator could take over the PST file data, in a different format, and keep a single copy of each item in a central location so that it can be integrated carefully into a solid backup strategy.

Well, the good news is that you can do this.

With the help of an Enterprise-class product like PST FlightDeck, PST files can be quickly ingested into Enterprise Vault and removed from network drives, and local end-user machines. Progress through a migration using PST FlightDeck is quick and users can see fantastic benefits such as being able to access a single copy of their data in a single location: their Enterprise Vault archive.

Enterprise Vault of course then provides many different ways of viewing this data, such as:

- Search, integrated into Outlook as well as browser based
- Archive Explorer
- Virtual Vault
- Shortcuts (if they were created during the migration)
- Viewing archived data from Outlook Web App

Many of these different facilities are available across devices, which folds in to the BYOD goal. Now that all of the data is available in one central place, users can use 'any device' to access it.

 pills2.jpg

The two different goals help sell each other to end-users, especially when you also consider that PST FlightDeck can cope with very old ANSI PST files by converting them to Unicode. It can instantly remove passwords from PST files. It can de-duplicate items across multiple PSTs from a single user (for example, if there is a month-old backup of a PST on a network share and a local copy on an end-user workstation with more up-to-date but largely the same data, these will be consolidated before sending to Enterprise Vault). It can also take a final backup of PST files, and perform any repairs on PST files if required.

Clearly the way

 path.jpg

It is clear from talking with many partners and customers that the PST problem needs to be addressed - it is a thorn in the side of many IT organisations who are doing great work trying to secure corporate data and maintain flexibility in working, yet have these easily obtainable, non-encrypted PST files scattered across end-user workstations, portable drives, and network shares. It is also clear that for many organisations trying to push all-users-have-one-standard-laptop isn't working, and a new approach such as the BYOD approach I described in the referenced article, or certainly some sort of hybrid approach, is needed to satisfy the way that the organisation wants end-users to work.

Getting PST file data into an archive is a win-win for both projects, and using a migration tool, or even (depending on size and complexity) native Enterprise Vault tools, will move the IT world forward, and hopefully generate some good press for the IT team inside the organisation.

 Moving on

 snail.jpg

Once all the data is inside the user's archive, the experience that end-users get today is not particularly amazing. The experience differs across devices, and even with the introduction of third-party products like CommonDesk and their ARCviewer product, it is still not as good an end-user experience as, say, Outlook with Virtual Vault. This experience though is only going to get better in coming releases of Enterprise Vault. Whilst Symantec Partners aren't allowed to go into detail about the new features, let's just say that there are features on the roadmap that will definitely help the experience of mobile or device-orientated users, and they'll be coming in the very near future.

Do you have a BYOD policy?  How has your organisation handled the historic PST-file problem?  Let me know in the comments below...

Moving to a new EV environment with Archive Shuttle


truck.jpg

There are many reasons why people appear on the Connect Forums or come to Symantec Partners saying that they have a really old version of Enterprise Vault and want to upgrade to the newest version. It could be that they just haven't had the staff to keep the Enterprise Vault environment up to date, or it could be that the version that they have 'did what it needed to do'.. at least until now.  In this article I'll explain to you how you can upgrade to the latest version of Enterprise Vault using a number of different techniques.

 The problem

 confused.jpg

Like many aspects of IT, skills, budget, and pure and simple 'time' often start to fall behind or run in short supply. Enterprise Vault appears to have a new major release about once a year, with service packs more often than that, so for some organisations keeping up with the latest-and-greatest is tricky, and costly. This perhaps became a little harder with the introduction of Enterprise Vault 10 and its strict requirement for Windows 2008 R2 x64.

We've seen many organisations which are still running Enterprise Vault 7, and Enterprise Vault 2007 - and even a few running Enterprise Vault 6.  These organisations have just come to realise that the version that they are using is very near end-of-life.

So, however the organisation got into the state of having an old version of Enterprise Vault, how can it get up to date when it needs to?

Solution: Upgrades, followed by upgrades, followed by more upgrades

 bulp.jpg

Unfortunately with Enterprise Vault you can't just take an old Enterprise Vault 6 system and run a single upgrade to get to EV 10.0.4, for example.  To go up from EV 6 to EV 10 involves many steps, for example:

Check the OS version is okay - e.g. if running Windows 2003, upgrade to the latest service pack.

Do backups of everything ... twice
Upgrade to EV 7
Upgrade to EV 2007
Upgrade to EV 8
Upgrade to EV 9
Move to Windows 2008 R2 x64
Upgrade to EV 10

There might even be the requirement to upgrade to a particular service pack before jumping to the appropriate major version - all of that is covered in the Enterprise Vault upgrade documentation. And of course sometimes it is not as simple as taking 'one big backup' at the beginning and running through all of the upgrades; it might be that business continuity and/or risk management teams decide that you have to run for a few weeks with the upgraded version before upgrading again. If that's the case, or even if you just want to take the belt-and-braces approach, you'd have to do new sets of backups between each upgrade. Time consuming - yes! Costly - yes!

All these steps are obviously very time consuming, and there is the additional step that at some point you have to switch the Operating System to Windows 2008 R2 x64. You can add a little twist to the steps above: when you get to EV 9 you can use the Server Settings Migration Wizard to 'jump' to new hardware and Enterprise Vault 10 in one step - but this also has some pre-requisites, especially when it comes to storage and its position in the environment (storage needs to be remote, or at least easily detachable and re-attachable).

So whilst this approach is certainly do-able, it is one that takes quite careful planning, a lot of lab testing-time, and plenty of time to do the actual upgrade. Of course during that upgrade, or several upgrades, the services are going to be unavailable to end-users.

Are there alternatives to this long approach? Yes...

Solution: Move Archive

 bulp2.jpg

Another possibility when it comes to solving this type of problem is to use the Enterprise Vault Move Archive feature. It's been around for a year or two now, and in certain situations it can handle a migration to the latest version of Enterprise Vault quite well. The downside though is that it only supports reasonably recent versions of Enterprise Vault, so if your source environment is older than EV 8, this isn't going to be an option for you.

Many organisations have pre-EV 8 environments, so is there an alternative solution for them? Yes...

Solution: 3rd party products

 bulp3.jpg

Using a third-party product is something that could be considered. Whilst it may cost some extra money, it can certainly eliminate many of the 'endless upgrade' loops that will almost certainly be encountered when going from an old version like EV 2007 to EV 10. It would be advisable to work out the costs of doing a one-by-one upgrade versus the licensing costs of a third-party product. You might be surprised at the result! One such product which is now quite mature is Archive Shuttle from QUADROtech. It uses a synch-and-switch approach to moving the archived data from the source environment to the target environment. It copes very well with different versions of Enterprise Vault, and provides administrators with a well thought out layout for managing the migration of the archives.

Here is a screenshot of the interface showing the export/import progress of some test archives that I migrated:

 as_stage1.png

Archive Shuttle has proven that it is fast too, with speeds of up to 100 GB per hour when extracting data and 80 GB per hour when ingesting data - and these are not just lab figures, they're from real customer environments.

Conclusion

As you can see, as a customer of Enterprise Vault there are a number of possibilities that can be explored in order for you to get up to date with Enterprise Vault. Of course you would need to ensure that you have an appropriately purchased license before going down any of these routes. Which one you go for may depend on many factors, like project timescales, resource availability, the knowledge and skill of the technicians involved, and so on.

Which of these methods would you choose, and why?  Let me know in the comments below...

Can you go entirely iPad and use Enterprise Vault?


There has been a surge of interest in the last few years in utilising tablet-based or even just small form factor computers and devices. It started back when netbooks became popular, but exploded into the mainstream with the Apple iPad. In the last few years I've seen all sorts of people owning an iPad: many either downsized from a big old desktop computer, or bought straight into iPad-land because of its neatness and size.

Personally, I love the compactness of the iPad, and with a small bluetooth keyboard, I'm beginning to think that you could use the iPad for 'everything' computer based, except of course if you are a hard core gamer, or programmer.

In this article, I'd like to discuss whether you can go 'all in' to use an iPad and still have a good end-user experience with Enterprise Vault.

Screen resolution, and keyboard?

 ipad.jpg

 

I don't have a retina iPad, just a regular 'old' iPad 2. But it really does do the job for 95% of what I want to do. I've seen and used a retina iPad, and if I was buying now, of course, that's what I'd go for. Will I upgrade? Not yet. With the onscreen keyboard active on an iPad 2 comes the problem that only a small part of the real estate (the screen) is usable/visible. It does make life tricky, that's for sure.

When it comes to Enterprise Vault though you'd be more or less talking about emails, and some documents. At least, that's what I would primarily focus on - I'll leave things like SharePoint, and FSA and even Domino for a completely different day - that discussion might be long and fraught with more problems?!

So, to counter the small usable real estate when you have the on-screen keyboard active and want to perform Enterprise Vault and email-related actions, really the only solution is to go for an external keyboard. These come in varying sizes, varying 'niceness' and varying prices. I went for a Logitech Ultrathin keyboard cover/case. It is great. It is Bluetooth, and you can slide the iPad into a groove along the top edge, so you are then in a sort of laptop mode.

So now I have the full screen real estate in order to handle my Enterprise Vault tasks!

It's all about Safari?

 apple.jpg

Up until about a year ago, iPads were all about Safari. Since then though, Google Chrome has come along, and Apple has done some work to integrate it more fully into an end-user's workflow and experience when using the iPad. But it is not Internet Explorer. We all know that Internet Explorer is THE browser that Enterprise Vault supports for things like:

- Browser Search
- Archive Explorer

So what can you do with mobile Safari (or Google Chrome)?

The answer is: not a whole lot. When you try to go to Archive Explorer on an iPad (using http://server/enterprisevault/archiveexplorerui.asp) you get a message on the page saying:

'Archive Explorer requires Internet Explorer 6.0 or later'.

Search fares a little better (almost the same URL as above, but with /search.asp at the end). Integrated search, which is much simpler (searcho2k.asp), also works this way.

Where now?

So where does that leave us? We can get our corporate email pretty easily, and with the merging of inboxes into the mail client, or even the usage of third party email clients, it's possible to see all of our mails.

But what can we do when we see an archived item? How can we tell it is even an archived item? Do we want iPad users to be down in Safari, or up still in the email client?

There are a few policy settings that an Enterprise Vault administrator may wish to consider, if they want to be 'nice' to iPad (or other tablet device) users. These are:

- Create shortcuts .. or not?

If you don't create shortcuts, and have age-based policies, then life is good: end-users simply won't see any archived items in their mailbox. Even if you use quota- and age-based archiving, provided you don't archive recent items, an end-user is still unlikely to encounter a shortcut. So carefully review the archiving policy for all users, or at least for those that are likely to use a tablet device.

- Shortcut content

At the very least I would suggest including the banner to indicate that the item is archived, otherwise end-users really might not know. You could also consider using the 'full message body', rather than just the first few hundred characters. This way the archived item appears pretty much like the non-archived item. This second option does have the downside that it will not be saving much space in the mailbox, since the item is still full-size.

Third Party or Wait?

There is a good third-party component from CommonDesk called ARCviewer which can be deployed to give end-users on devices like iPads (and others) an Archive Explorer-like interface. I saw this a long time ago and it is good - I liked it. It does mean another third party in the archiving mix, and it's something else that has to be deployed and supported.

If you don't want to do that though, rumours abound about new things coming in Enterprise Vault 11, which will address this. I don't want to give out specifics, but it is likely that there will be a 'client' of sorts that can be integrated into just about any IMAP client.

Conclusion

So, can you go all-iPad? I think that the answer to that is yes, you can. There are some tweaks and some issues, but largely, you can still have a good Enterprise Vault experience. You can do quite a bit of 'normal' activities with archived items, but, it would help if an administrator made some changes to Enterprise Vault. Getting used to this now will be a good thing, because there is more 'good stuff' right around the corner in the Enterprise Vault-world. Remember it's always possible to install TeamViewer and connect to a workstation should the 'need' arise.

How do you access your archived items with an iPad or other tablet? Let me know in the comments below...

Rebalancing mailbox archives with Archive Shuttle


Once an organisation has been using Enterprise Vault for several years there might come a time through company acquisitions or general growth that the current Enterprise Vault environment needs to be expanded outwards.  The Enterprise Vault organisation needs to grow, just like the business organisation.  In this article I'll explain a few different ways in which this growth can take place.

stones.jpg

Build new - and use

One way to expand is to simply build a whole new Enterprise Vault server, with a new Vault Store and new Vault Store Partitions, and then start creating new archives on it. It will lead to a somewhat uneven balance, at least at the start, and it won't stop the current server from continuing to grow. Existing archives on the original server are still likely to be growing, with new email and other data arriving daily and getting archived per the schedule and policies.

Going for this approach does mean that nothing extra needs to be done, save perhaps for some minor work on the existing server and at the very least careful monitoring of disk space usage if the server continues to consume additional space.

The newly created archives, either for new data or for new users, will have the full run of the new server, which will start to consume space as time goes on. So gradually, over time, the current server will in some ways stabilise (some people will leave and free up space that the existing users will use some of, and new users will be commissioned on the new server).

building.jpg

Wait - and swing upgrade

Sometimes organisations are fortunate enough to be able to plan for growth and tie it to a time when they plan to upgrade too. For example, upgrading from EV 9 to EV 10, or in the near future it might be when they upgrade from EV 10 to EV 11. Doing a swing upgrade, effectively replacing the server underneath all the data, means that all of the existing archives and users get the benefit of a new, more modern, faster system. It could also be that the storage gets expanded at the same time, or at the very least new partitions are created, either on existing devices or new ones, and these will then be used for the environment to grow into.

With this approach you'll probably still be left with the one server, but it'll be much newer and therefore faster. The problem that would still need to be addressed is storage, but Vault Store partitions can effectively be created almost anywhere; they just need some level of management to keep them somewhat under control. The advantage of this approach is that you haven't expanded the number of servers, so you're left with only one that needs maintenance, but you also have the downside of having a single point of failure.

Build new - and move

 chess.jpg

As long as an organisation has kept reasonably up to date in terms of Enterprise Vault version, the IT team might be able to make use of the Move Archive feature to move some existing mailbox archives to a new server.  So like the first option a new server is built, and commissioned, with a new vault store, and vault store partitions.  Perhaps new archives will be made on the new server going forward, but in addition, with this approach the IT team also use some of the built in features of Enterprise Vault and move some of the mailbox archives to the new hardware.

With this approach you could start from a server which is at, say, 90% capacity and, over a period of a few weekends where you have more 'run of the system', move some of that data over to the new machine and new storage, leaving say 50% on the original machine and 40% on the new machine. Now both machines have the capability to grow - the server spec of the old machine is likely to become a problem in the medium to long term, but that's also replaceable at some point.

Build new - and balance

Another option, which builds on the previous idea, is to build new hardware and use third-party software to help with balancing mailbox archives between the different systems. One such possibility is Archive Shuttle from QUADROtech. Its easy-to-use, web-based interface presents information relating to archives in an intuitive way, and it's quite easy to define batches of users that go through a migration. The migration happens first of all using a synchronisation approach: data is copied from the source to the target, meaning it remains available to users throughout the migration. The second stage is the switch, which is usually quick and painless - just a matter of minutes. A last synchronisation takes place and then a number of steps are performed to reunite the user with their target archive, which in our scenario will be on the new machine. There are many possibilities that can be explored with Archive Shuttle; the previous idea fulfils the immediate need we're describing here, but you can in fact use something like Archive Shuttle to help with migration to the cloud as well.

Working through a balancing process like this can mean that the introduction of new hardware can be a quick and painless affair, and during the whole process users maintain access to their archived data and that's a big benefit.  It is also pretty easy to administer with a low overhead in terms of system requirements and load on the systems which are in place.

Sharing

 road.jpg

 Some of the ideas presented here are of course dependent on the sharing level within Enterprise Vault.  If a new vault store is created, in the same vault store group where sharing is configured across the group, then any movement of email archive data isn't going to have an effect on the actual data stored down on the vault store partitions.

If sharing is configured in this way (and it is a good thing to do), it might be that you would need to create a new vault store group, and a new vault store within that group. You'd have to do that to realise some of the benefits of moving archives 'off' a main server on to a new server; otherwise the new server's archives end up just holding references back to the data on the old server.

Summary

Growth is what everyone hopes for in their business, but with it come some costs, and eventually those costs spread to things like infrastructure upgrades. We've seen here how a perfectly well designed Enterprise Vault environment may require some growth, and some attention, over the space of a few years.

How has your Enterprise Vault environment grown? Let me know in the comments below...

 

Don’t Lose the Data: Six Ways You May Be Losing Mobile Data and Don’t Even Know It (White Paper)


When your workplace is mobile, will your business get carried away?

The mobile devices your employees love to use on their own time have now also become the business tools they use on your dime. Our recent research tells the story: 65 percent of our surveyed companies give employees network access through their own devices; 80 percent of the applications these employees use are not based on-premise, but in the cloud; and 52 percent regularly use, not one, but three or more devices.

Sure, these mobile devices – including smartphones, laptops and tablets – open up new opportunities for portable productivity. But by their very mobile nature, they also open new vulnerabilities: new ways to lose data, lose protection and lose confidence in the security of your company network.

Fortunately, productivity and protection can travel together – if you fully understand what the risks are and what you can do to mitigate them. This paper briefly reviews the top six threats to your mobile workforce, matching real-world hazards with really helpful ways you can take action and achieve the security your business requires.

Read the attached white paper for the rest of the story...

6 Ways You May Be Losing Mobile Data


What's New in IT Analytics Symantec Data Loss Prevention 3.0


Building on the success of the previous version of IT Analytics for Symantec Data Loss Prevention and incorporating some fantastic user feedback, Symantec has just released version 3.0 of the reporting content pack. For existing IT Analytics customers, the new version of IT Analytics for Symantec Data Loss Prevention is now available for upgrade through the Symantec Installation Manager. Some of the highlights within the new version include:

New Cube: Incident Status History

This new cube contains historical information about incident status changes within the Data Loss Prevention system, including details about who performed the change and when. Information specific to this cube includes the total number of incident actions, change date, user name, and more.
 

Cube Updates

All cubes have been updated to be more consistent with DLP nomenclature and several cubes have been updated with additional dimensions and measures to provide greater options in reporting. Additionally, the DLP Discover Scans cube has been updated to support all scan types. For cube definitions, including the list of available measures and dimensions, please see the official IT Analytics for Symantec Data Loss Prevention 3.0 User Guide.
 

New Reports

Dozens of new out-of-the-box reports were added to the new release, including the list below. Report subscriptions can be enabled for all of these reports so that they can be received via email on a recurring basis. For definitions of each report, please see the official IT Analytics for Symantec Data Loss Prevention 3.0 User Guide.

  • DLP Auditing – User Action Auditing
  • DLP Auditing – User Event Details
  • DLP Auditing – User Incident Event Summary
  • DLP Deployment – Agent Search
  • DLP Deployment – Agent Version by Server
  • DLP Deployment – Policy Evolution Trend
  • DLP Deployment – Scan Summary
  • DLP Investigations – Discover File Incidents by File Owner Trend
  • DLP Investigations – Networking File Incidents by Networking User Trend
  • DLP Investigations – User Incident Details
  • DLP Investigations – User Incident Search
  • DLP Normalized Risk – Frequency of Discover Incidents vs. Files Scanned Trend
  • DLP Normalized Risk – Frequency of Discover Incidents vs. GB Scanned Trend
  • DLP Normalized Risk – Frequency of Email Incidents (Email Prevent)
  • DLP Normalized Risk – Frequency of Web Incidents
  • DLP Policy Optimization - Policy Change Audit
  • DLP Policy Optimization – Policy Change Impact
  • DLP Policy Optimization – Policy Change Trend
  • DLP Policy Optimization – Policy Changes
  • DLP Remediation – Discover Incident Details
  • DLP Remediation – Discover Incident Search
  • DLP Remediation – Endpoint Incident Details
  • DLP Remediation – Endpoint Incident Search
  • DLP Remediation – Incidents Search
  • DLP Remediation - Incident Status History Details
  • DLP Remediation – Network Incident Details
  • DLP Remediation – Network Incident Search
  • DLP Remediation – Remediator Productivity
  • DLP Statistics – Discover Scanned File Trend
  • DLP Statistics – Discover Scanned Storage Trend
  • DLP Statistics – Endpoint Incident Trend by Channel
  • DLP Statistics – Organizational Incident Trend
  • DLP Statistics – Incidents by Policy
  • DLP Statistics – Incidents by Product Area
  • DLP Statistics – Incidents by Severity
  • DLP Statistics – Incidents by Status
  • DLP Statistics – Incident Trend by Product Area
  • DLP Statistics – Scans
  • DLP System Management – Agent Summary by Status
  • DLP System Management – Agent Summary by Version
 

Processing Performance

Cube processing performance has been greatly improved and optimized to provide shorter processing times on average. NOTE: The processing time varies depending on the amount of data to be included in the cubes and the server hardware specifications present in your environment.

Download and install the new version today and gain greater flexibility and insight into your Symantec Data Loss Prevention reporting!

{CWoC} Patch Trending: Inactive Computer Trending Report


With the release of the {CWoC} Site Builder version 11 [1], comes the need for a new trending report, in order to record the count of inactive computers over time. This document contains everything you need to create the report and understand what it does, and why.

Content:

  • Summary
  • Background
  • Selecting metrics
  • Implementation
  • References

Summary:

The SQL code provided here (and in a Connect download that will be kept up to date [2]) creates 3 tables:

  • TREND_InactiveComputerCounts
  • TREND_InactiveComputer_Current
  • TREND_InactiveComputer_Previous

And a stored procedure:

  • spTrendInactiveComputers

The procedure can be simply called with:

exec spTrendInactiveComputers

in which case the tables will be populated if the last record was taken more than 23 hours prior to the query execution; or you can invoke the procedure in force mode, which will cause data to be stored regardless of the previous execution time:

exec spTrendInactiveComputers @force = 1

This is useful if you have missed a schedule, or if you want to kick-start the data gathering process.

So you can now trend inactive computers on your SMP by adding the above code to a report and scheduling it to run daily (either via a Task or a Notification Policy).
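For charting or further reporting you can simply read the recorded data back from the trending table. Here is a minimal sketch (it assumes the TREND_InactiveComputerCounts table that the procedure creates, defined later in this article) which pulls the last 30 days of records in chronological order:

-- Minimal sketch: read back the last 30 days of trend records for charting.
-- Assumes the TREND_InactiveComputerCounts table defined later in this article.
select [timestamp],
       [Managed machines],
       [Inactive computers (7 days)],
       [Inactive computers (17 days)],
       [New Inactive computers],
       [New Active computers]
  from TREND_InactiveComputerCounts
 where [timestamp] > dateadd(day, -30, getdate())
 order by _exec_id asc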


Background:

As you have probably seen from my various "Patch Trending" related posts (downloads, articles or blog entries), adding trending is a simple and important part of managing Patch Management compliance for computer estates of varying sizes.

Quickly deploying patches to 30,000 computers world-wide is not an easy task, especially when the target is to have 95% of the computers compliant at 95% or above (I call this the 95-95 rule), and when this ambitious target is not met we need to be able to explain why.

So we need to know how many computers are inactive (off, or not reporting data) over time, in order to factor that downtime in with the compliance data: given we have new updates to install every 4 to 5 weeks, having computers off during that time is going to make it harder to meet the 95-95 goal.


Selecting metrics:

Now that we have a target, we need to make sure we select the right metrics to monitor the estate and flag any problems that could be unrelated to normal time off.

I started with 2 key metrics: count of computers inactive for more than 7 days, and count of computers inactive for more than 17 days.

I selected 7 days because it ensures we do not capture short periods of time off: with a 5-day value, for example, you would catch someone who turns off their computer on a Friday night and returns to work on the following Thursday. And it isn't too long, so we still get to see those periods of inactivity.

The upper threshold is set to 17 days, so we catch any computers that have been inactive for more than a two-week holiday. This gives us a bracket (7 to 17 days) from which to calculate the count of computers out of the office for 1 to 2 weeks.
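As an illustration only (it relies on the TREND_InactiveComputerCounts table that is created further down in this article), the 1-to-2-week bracket can be derived directly from the two recorded counts:

-- Sketch: derive the 7-to-17-day bracket from the two recorded counts.
-- Assumes the TREND_InactiveComputerCounts table created later in this article.
select [timestamp],
       [Inactive computers (7 days)] - [Inactive computers (17 days)] as [Inactive 7 to 17 days]
  from TREND_InactiveComputerCounts
 order by _exec_id desc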

I have also added two elements that I found interesting: computers added to, and removed from, the 7 days+ pool. This will make for the most interesting part of the implementation below :D.

So here is the summary of those metrics to gather:

  • Managed computers: Count of managed computers in the SMP database.
  • 7 days: Count of computers inactive for more than 7 days.
  • 17 days: Count of computers inactive for more than 17 days.
  • 7 days ++: Count of computers added to the "7 days" count. These are computers that were not inactive in the previous record (t -1).
  • 7 days --: Count of computers removed from the "7 days" count. These are computers that were inactive at (t -1) and that are not currently inactive, i.e. they are back to Active!


Implementation:

With the metrics selected, making sure we get accurate data came next. I started with the SQL query that first raised the questions about inactive computers: looking at the Evt_AeX_SWD_Execution and Evt_NS_Event_History tables I could see how many computers had reported data in the last 24 hours, or n days.

Joining in the Evt_NS_Client_Config_Request I had the same result set - so the computers requesting policies were also sending data - and the opposite pointed to computers being inactive from an agent standpoint.

Finally I checked the result against the data provided by running spGetComputersToPurge and it became clear that the data matched - so I decided to rely on this procedure's content (I extracted the SQL) to list computers inactive for more than 7 and 17 days.

This is done with the following code:

set @inactive_1 = (
	select count(distinct(c.Guid))
	  from RM_ResourceComputer c
	 INNER JOIN
		(
		select [ResourceGuid]
		  from dbo.ResourceUpdateSummary
		 where InventoryClassGuid = '9E6F402A-6A45-4CBA-9299-C2323F73A506' 		
		 group by [ResourceGuid]
		having max([ModifiedDate]) < GETDATE() - 7
		 ) as dt 
		ON c.Guid = dt.ResourceGuid	
	 where c.IsManaged = 1
	)
set @inactive_2 = (
	select count(distinct(c.Guid))
	  from RM_ResourceComputer c
	 INNER JOIN
		(
		select [ResourceGuid]
		  from dbo.ResourceUpdateSummary
		 where InventoryClassGuid = '9E6F402A-6A45-4CBA-9299-C2323F73A506' 		
		 group by [ResourceGuid]
		having max([ModifiedDate]) < GETDATE() - 17
		 ) as dt 
		ON c.Guid = dt.ResourceGuid	
	 where c.IsManaged = 1
	)

But getting a count of computers inactive for 7 or 17 days is not sufficient to understand what is happening in the environment. If on a given day 200 computers are added to the inactive count and 195 are removed, our records will only show a delta of 5 computers. We'd have no way of calculating the churn rate - which is very important in relation to Patch Management.
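As a hedged sketch (again using the TREND_InactiveComputerCounts table created below), once the added and removed counts are recorded the churn can be derived directly from them; in the 200-added / 195-removed example above this would report a churn of 395 computers against a net change of only 5:

-- Sketch: derive daily churn and net change from the recorded add/remove counts.
-- Assumes the TREND_InactiveComputerCounts table created later in this article.
select [timestamp],
       [New Inactive computers] + [New Active computers] as [Churn],
       [New Inactive computers] - [New Active computers] as [Net change]
  from TREND_InactiveComputerCounts
 order by _exec_id desc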

To populate the "7 days ++" and "7 days --" metrics we need to compare the inactive computer datasets between the current and previous recording. As such we'll create 2 tables to store the current and previous computer GUIDs:

if not exists (select 1 from sys.objects where type = 'u' and name = 'TREND_InactiveComputer_Current')
begin
	CREATE TABLE [TREND_InactiveComputer_Current] (guid uniqueidentifier not null, _exec_time datetime not null)
	CREATE UNIQUE CLUSTERED INDEX [IX_TREND_InactiveComputer_Current] ON [dbo].[TREND_InactiveComputer_Current] 
		(
			[Guid] ASC
	)
end

if not exists (select 1 from sys.objects where type = 'u' and name = 'TREND_InactiveComputer_Previous')
begin
	CREATE TABLE [TREND_InactiveComputer_Previous] (guid uniqueidentifier not null, _exec_time datetime not null)
	CREATE UNIQUE CLUSTERED INDEX [IX_TREND_InactiveComputer_Previous] ON [dbo].[TREND_InactiveComputer_Previous] 
		(
			[Guid] ASC
	)
end

We populate the 2 tables in this manner:

truncate table TREND_InactiveComputer_Previous
insert TREND_InactiveComputer_Previous (guid, _exec_time)
select * from TREND_InactiveComputer_Current

-- STAGE 2: Insert current data in the current table
truncate table TREND_InactiveComputer_Current
insert TREND_InactiveComputer_Current (guid, _exec_time)
select distinct(c.Guid) as 'Guid', getdate()
  from RM_ResourceComputer c
 INNER JOIN
	(
	select [ResourceGuid]
	  from dbo.ResourceUpdateSummary
	 where InventoryClassGuid = '9E6F402A-6A45-4CBA-9299-C2323F73A506' 		
	 group by [ResourceGuid]
	having max([ModifiedDate]) < GETDATE() - 7
	 ) as dt 
    ON c.Guid = dt.ResourceGuid	
 where c.IsManaged = 1

And then we calculate the added and removed computer counts:

declare @added as int, @removed as int
		 -- Added in c
			 set @added = (
					select count(*)
					  from TREND_InactiveComputer_Current c
					  full join TREND_InactiveComputer_Previous p
						on p.guid = c.guid
					 where p.guid is null
			)

			-- Removed in c
			 set @removed = (
					select count(*)
					  from TREND_InactiveComputer_Current c
					  full join TREND_InactiveComputer_Previous p
						on p.guid = c.guid
					 where c.guid is null
			)

We also need a trending table to store the daily statistics for later use. It is defined here:

if not exists (select 1 from sys.objects where type = 'u' and name = 'TREND_InactiveComputerCounts')
begin
	create table TREND_InactiveComputerCounts (
		[_exec_id] int not null,
		[timestamp] datetime not null,
		[Managed machines] int not null,
		[Inactive computers (7 days)] int not null,
		[New Inactive computers] int not null,
		[New Active computers] int not null,
		[Inactive computers (17 days)] int not null
	)
end

And we populate in this manner:

declare @execid as int
     set @execid = (select isnull(max(_exec_id), 0) from TREND_InactiveComputerCounts) + 1

insert TREND_InactiveComputerCounts (_exec_id, timestamp, [Managed machines], [inactive computers (7 days)], [New Inactive Computers], [New Active Computers], [Inactive Computers (17 days)])
values (@execid, getdate(), @managed, @inactive_1, @added, @removed, @inactive_2)

To wrap the different tasks into a self-contained process, the procedure runs the code in this manner (pseudocode):

if (current table is not empty) {
    truncate previous table
    insert current data into previous
    truncate current table
    insert query results into current
    count added
    count removed
    count managed machines
    count inactive 7-days
    count inactive 17-days
    insert data into trending table
} else {
    insert query results into current
}


And here is the full code for the procedure spTrendInactiveComputers:

create procedure spTrendInactiveComputers
	@force as int = 0
as
if not exists (select 1 from sys.objects where type = 'u' and name = 'TREND_InactiveComputerCounts')
begin
	create table TREND_InactiveComputerCounts (
		[_exec_id] int not null,
		[timestamp] datetime not null,
		[Managed machines] int not null,
		[Inactive computers (7 days)] int not null,
		[New Inactive computers] int not null,
		[New Active computers] int not null,
		[Inactive computers (17 days)] int not null
	)
end

if not exists (select 1 from sys.objects where type = 'u' and name = 'TREND_InactiveComputer_Current')
begin
	CREATE TABLE [TREND_InactiveComputer_Current] (guid uniqueidentifier not null, _exec_time datetime not null)
	CREATE UNIQUE CLUSTERED INDEX [IX_TREND_InactiveComputer_Current] ON [dbo].[TREND_InactiveComputer_Current] 
		(
			[Guid] ASC
	)
end

if not exists (select 1 from sys.objects where type = 'u' and name = 'TREND_InactiveComputer_Previous')
begin
	CREATE TABLE [TREND_InactiveComputer_Previous] (guid uniqueidentifier not null, _exec_time datetime not null)
	CREATE UNIQUE CLUSTERED INDEX [IX_TREND_InactiveComputer_Previous] ON [dbo].[TREND_InactiveComputer_Previous] 
		(
			[Guid] ASC
	)
end

if ((select MAX(_exec_time) from TREND_InactiveComputer_Current where _exec_time >  dateadd(hour, -23, getdate())) is null) or (@force = 1)
begin
	-- STAGE 1: If we have current data, save it in the _previous table
	if (select count (*) from TREND_InactiveComputer_Current) > 0
		begin
			truncate table TREND_InactiveComputer_Previous
			insert TREND_InactiveComputer_Previous (guid, _exec_time)
			select * from TREND_InactiveComputer_Current

		-- STAGE 2: Insert current data in the current table
		truncate table TREND_InactiveComputer_Current
		insert TREND_InactiveComputer_Current (guid, _exec_time)
		select distinct(c.Guid) as 'Guid', getdate()
		  from RM_ResourceComputer c
		 INNER JOIN
			(
			select [ResourceGuid]
			  from dbo.ResourceUpdateSummary
			 where InventoryClassGuid = '9E6F402A-6A45-4CBA-9299-C2323F73A506' 		
			 group by [ResourceGuid]
			having max([ModifiedDate]) < GETDATE() - 7
			 ) as dt 
			ON c.Guid = dt.ResourceGuid	
		 where c.IsManaged = 1

		 --STAGE 3: Extract the add/drop counts and insert data in the trending table
		 declare @added as int, @removed as int
		 -- Added in c
			 set @added = (
					select count(*)
					  from TREND_InactiveComputer_Current c
					  full join TREND_InactiveComputer_Previous p
						on p.guid = c.guid
					 where p.guid is null
			)

			-- Removed in c
			 set @removed = (
					select count(*)
					  from TREND_InactiveComputer_Current c
					  full join TREND_InactiveComputer_Previous p
						on p.guid = c.guid
					 where c.guid is null
			)

		declare @managed as int, @inactive_1 as int, @inactive_2 as int
		set @managed = (select count(distinct(Guid)) from RM_ResourceComputer where IsManaged = 1)
		set @inactive_1 = (
			select count(distinct(c.Guid))
			  from RM_ResourceComputer c
			 INNER JOIN
				(
				select [ResourceGuid]
				  from dbo.ResourceUpdateSummary
				 where InventoryClassGuid = '9E6F402A-6A45-4CBA-9299-C2323F73A506' 		
				 group by [ResourceGuid]
				having max([ModifiedDate]) < GETDATE() - 7
				 ) as dt 
				ON c.Guid = dt.ResourceGuid	
			 where c.IsManaged = 1
		)
		set @inactive_2 = (
			select count(distinct(c.Guid))
			  from RM_ResourceComputer c
			 INNER JOIN
				(
				select [ResourceGuid]
				  from dbo.ResourceUpdateSummary
				 where InventoryClassGuid = '9E6F402A-6A45-4CBA-9299-C2323F73A506' 		
				 group by [ResourceGuid]
				having max([ModifiedDate]) < GETDATE() - 17
				 ) as dt 
				ON c.Guid = dt.ResourceGuid	
			 where c.IsManaged = 1
		)
		declare @execid as int
			set @execid = (select isnull(max(_exec_id), 0) from TREND_InactiveComputerCounts) + 1

		insert TREND_InactiveComputerCounts (_exec_id, timestamp, [Managed machines], [inactive computers (7 days)], [New Inactive Computers], [New Active Computers], [Inactive Computers (17 days)])
		values (@execid, getdate(), @managed, @inactive_1, @added, @removed, @inactive_2)
	end
	else
	begin
		truncate table TREND_InactiveComputer_Current
		insert TREND_InactiveComputer_Current (guid, _exec_time)
		select distinct(c.Guid) as 'Guid', getdate()
		  from RM_ResourceComputer c
		 INNER JOIN
			(
			select [ResourceGuid]
			  from dbo.ResourceUpdateSummary
			 where InventoryClassGuid = '9E6F402A-6A45-4CBA-9299-C2323F73A506' 		
			 group by [ResourceGuid]
			having max([ModifiedDate]) < GETDATE() - 7
			 ) as dt 
			ON c.Guid = dt.ResourceGuid	
		 where c.IsManaged = 1
	end
end

select * from TREND_InactiveComputerCounts order by _exec_id desc


References

[1] {CWoC} Patch Trending SiteBuilder

[2] {CWoC} Patch trending stored procedures


How to Create a Filter in Gmail That Will Deliver Connect Messages to My Inbox


Are you using Gmail as your primary contact email or email client for receiving Connect notifications and find that a lot of your messages are being delivered to the Spam folder?

Creating a filter that will deliver your Connect messages to the Inbox is a simple task - then you won't need to check your Spam folder for our messages ever again!

First you need to start with a message from us that has landed in your Spam folder that you know is a legitimate Connect message. Go right ahead and open it. You'll see icons in the upper right of your Gmail message window - click on the dropdown menu.

It never hurts to add us to your Contacts list but don't get distracted - we're creating a Filter.

Choose "Filter messages like this"

filter_1.png

A new window will open with a list of fields for you to fill in:

filter_2.png

This is simple - no need to add any additional information. Click "Create filter with this search>>" and the next window will open.

filter_3.png

You can get all fancy here and customize the filter, but as long as you click "Never send to Spam" your messages will no longer end up with ads for Time Shares and Cheap Pharmaceuticals.

Click "Create filter" and you are done. Simple right?

Here is the payoff. The next Connect message that would have gone to your Spam folder will be in your Inbox and have this yellow banner across the top letting you know your filter is doing its job.

filter_4.png

Sizing your PST Migration


A PST migration is something that messaging administrators are getting involved in more and more often. The migration is not always to Enterprise Vault, but there is always a seemingly endless list of things that need to be considered, especially if you are performing the migration in an organisation that spans different countries. This is sometimes because of different laws in different countries, but is usually simply because of different cultures. Spanning timezones and international boundaries are other problems, never mind trying to figure out how much of a PST estate you have to migrate.

spst-01.jpg

In this article I'll explain one of the very important aspects of any migration: the possible ways to go about figuring out how many PST files exist in an environment and how much space they consume. Hopefully this will be useful to an individual end customer, who would only need to perform the migration once, and also to consultants who might work for several customers over time, performing multiple migrations. Those consultants will get the chance to fine-tune some of the information that I describe in this article.

The problem

PST files get everywhere! End-users store them on their laptop, their desktop, their home folder, shared folders on the network, USB drives, and maybe even CDs or DVDs. In fact, people love them so much they probably have multiple copies of them, spread across one or more different locations, and perhaps even on different media (e.g. laptop and external drives). They turn up everywhere, and those multiple copies really can cause problems, and of course skew storage calculations slightly. Some of the PST files might truly not be needed by users any more - they might not even know that they still have them.

Finding a solution

spst-02.jpg

So how do you solve the problem of finding out where all these PST files are, how big they are, and who owns them? There are a myriad of possibilities, and often side-features of different products have ways and means of scanning for files of a particular type or extension and producing reports on what they find.

Manual and scripting

If you don't have a software deployment tool that can scan for files of a particular extension, then you can always roll your own via a script that runs at login, or even runs remotely, depending on how adventurous you are with scripting. Usually you'd gather all this data back into some sort of CSV file ready to go into Excel or similar, so that you can slice and dice the results, summarise, extrapolate and eventually calculate or estimate the total size and total quantity of PST files in the environment. A minimal example of such a script is sketched below.
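To make that a little more concrete, here is a minimal Python sketch of the kind of script you might roll yourself. It is illustrative only: the scan roots, the central CSV share and the column layout are assumptions rather than part of any product, and a real deployment would also need to handle duplicates, locked files and many machines writing results at the same time.

# Hypothetical example: walk a set of local and network locations, record every
# .pst file found, and append machine, user, path and size to a central CSV
# that can later be opened in Excel for slicing and dicing.
import csv
import os
import socket

SCAN_ROOTS = [r"C:\Users", "H:\\", r"\\fileserver\home"]    # assumed locations to scan
OUTPUT_CSV = r"\\fileserver\pst-audit\pst_inventory.csv"    # assumed central share

def find_pst_files(roots):
    """Yield (path, size_in_bytes) for every .pst file under the given roots."""
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda err: None):
            for name in filenames:
                if name.lower().endswith(".pst"):
                    full_path = os.path.join(dirpath, name)
                    try:
                        yield full_path, os.path.getsize(full_path)
                    except OSError:
                        pass  # locked, deleted or inaccessible - skip it

def main():
    machine = socket.gethostname()
    user = os.environ.get("USERNAME", "unknown")
    with open(OUTPUT_CSV, "a", newline="", encoding="utf-8") as handle:
        writer = csv.writer(handle)
        for path, size in find_pst_files(SCAN_ROOTS):
            writer.writerow([machine, user, path, size])

if __name__ == "__main__":
    main()

Run at login, or pushed out by whatever deployment tool you have, each workstation simply appends its findings to the shared CSV ready for the kind of analysis described later in this article.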

Automation

spst-03.jpg

Doing this via scripting is one way of semi-automating the capture of the information relating to PSTs. Of course, the best way to gather all of this data is to have some properly automated capture - and analysis too. One of the great new features of PST FlightDeck by QUADROtech is that you can deploy the server side (i.e. the FlightDeck server) and roll out a small MSI, called the Migration Agent, to each workstation, and that agent will report back PST file sizes, locations, and ownership information.

Once that information starts to return to the server, you can then begin the process of analysing it and work out things like:

'Who has the most PST files?'

'Who has more than 10 PST files?'

'Who has PST files totalling over 15 GB?' 

... and so on. The small Migration Agent is a powerful tool because it can scan local drives, network drives, and even removable storage devices, looking for PST files belonging to a user. It runs outside of Outlook, which gives it some advantages over the client-driven migration that is built in to the Enterprise Vault Outlook Add-in. In fact, it doesn't even have to lead to ingesting data into Enterprise Vault at all (Enterprise Vault is by far the most common target though!) - the target for the migration can be things like Exchange 2010 Personal Archives, or Office 365, and other destinations which are currently being developed. A rough illustration of this sort of analysis follows below.
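Whichever way the raw numbers are gathered - FlightDeck's Migration Agent, a deployment tool, or a home-grown script like the one sketched earlier - answering those sorts of questions is just an aggregation exercise. The sketch below is purely hypothetical: it assumes the machine/user/path/size CSV layout from the earlier example, not FlightDeck's actual data model.

# Hypothetical example: summarise the collected PST inventory to answer
# questions such as "who has the most PST files?" and "who has more than
# 15 GB of PST data in total?". Assumes the CSV layout from the earlier sketch.
import csv
from collections import defaultdict

INVENTORY_CSV = r"\\fileserver\pst-audit\pst_inventory.csv"   # assumed location
SIZE_THRESHOLD_GB = 15

def summarise(path):
    counts = defaultdict(int)   # user -> number of PST files found
    totals = defaultdict(int)   # user -> total size in bytes
    with open(path, newline="", encoding="utf-8") as handle:
        for _machine, user, _pst_path, size in csv.reader(handle):
            counts[user] += 1
            totals[user] += int(size)
    return counts, totals

def main():
    counts, totals = summarise(INVENTORY_CSV)
    print("Users with the most PST files:")
    for user, count in sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:10]:
        print(f"  {user}: {count} files")
    print(f"Users with more than {SIZE_THRESHOLD_GB} GB of PST data:")
    for user, total in totals.items():
        if total > SIZE_THRESHOLD_GB * 1024 ** 3:
            print(f"  {user}: {total / 1024 ** 3:.1f} GB")

if __name__ == "__main__":
    main()

Remember that the duplicate copies mentioned earlier will inflate these totals, so some de-duplication (for example on file name and size) is well worth doing before you plan the ingestion.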

There are other tools and products that can also assist with automating this data capture and review, so it is definitely worth exploring the different options which are available. You should weigh up the cost of using a full-blown solution like FlightDeck against trying to gather some of the statistics manually, via software deployment tools, or via scripting.

Conclusion

As you can see, getting a handle on the size and number of PST files in a PST migration project is key to understanding whether the target of the migration can cope with the influx of data. It is not just down to the time it takes to ingest that data into the target; it's the long-term ability of the target to handle that data, have it backed up regularly, and keep it readily searchable and accessible for retrievals. Knowing the size and quantity of PST files allows for a smoother migration, and allows secondary work to be performed on the underlying infrastructure to support the long-term storage. Gathering this data quickly and accurately is essential to the success of the migration, whatever the target for the PST data.

How have you sized your PST migration? If you're a consultant have you adapted any strategies and reused them between customers, or have you started from a blank sheet of paper each time?

Announcing the 3.10 Version of SORT - Now With NetBackup 7.6 Support


 The Symantec Operations Readiness Tools (SORT) team is pleased to announce that the new SORT 3.10 release includes support for the upcoming NetBackup 7.6 release.

The list below highlights the NetBackup-specific updates to SORT for this release:

General

  • Support for NetBackup 7.6!
  • Standardize OS and CPU architecture names across SORT for integration into the new Product and Platform Lookup widget
    https://sort.symantec.com/productmatrix/platform
  • Incorporate updates from changes to all NetBackup compatibility lists (Operating System (SCL) and associated features, Hardware (HCL), etc.)
  • Data collectors/custom reports and Installation & Upgrade Checklist
  • Incorporate updates for the Hot Fix / EEB Release Auditor

Installation and Upgrade Custom Report

  • Support for NetBackup 7.6
  • New checks and reporting for NetBackup 7.6 system requirements
  • New check and reporting of a remote/shared EMM server environment which is no longer supported in NetBackup 7.6
  • Inclusion of the latest NetBackup & OpsCenter hot fix information
  • Incorporates updates for the Hot Fix / EEB Release Auditor

Installation and Upgrade Checklist

  • Support for NetBackup 7.6
  • Created separate System Requirements sections for Master and Media servers to better reflect their differing requirements
  • Update to the Master and Media Server System Requirements sections for NetBackup 7.6
  • Inclusion of the latest NetBackup, NetBackup Appliance and OpsCenter hot fix information
  • Support for the Windows Server 2012 platform as a Master Server
  • Support for the Ubuntu 13.04 platform
  • Support for Oracle Linux 5 and 6 as OpsCenter Server platforms

Visit SORT to see the value we provide to thousands of Symantec customers.

How to monitor Enterprise Vault


There are many different helpful articles and documents on the Symantec web site and some in the documentation which talk about how to monitor Enterprise Vault. In this article though, I'd like to present a different approach. I'd like to talk about the types of things to monitor rather than the detail of HOW to monitor the different components which make up an Enterprise Vault environment. Of course there are some fundamental things that we must cover, and we will, but there are also (I hope) some twists in the story.

Disk Space

mon-01.jpg

Disk space is nearly always the first thing that people think about when it comes to monitoring any kind of system. Whether the Enterprise Vault server is running on local direct-attached disk, network storage, a SAN, or a combination of all of those, monitoring the disk free space is essential (a minimal free-space check is sketched after the list below). Running low on disk space will cause Enterprise Vault to shut down - unless the Enterprise Vault Admin Service has been configured not to monitor disk free space (and I don't think that is a particularly good idea in most circumstances). Consider, though, the different types of disk usage that are needed for the Enterprise Vault environment:

  • Temporary space for activities like conversion to HTML for indexing
  • PST Temporary processing
  • FSA Pass through recall
  • Vault Cache builds
  • Vault Store partition data
  • Recalled CAB files and extraction from them
  • Indexing data
  • Windows and application updates
  • PST Holding Folder

The list is actually quite long, and where each of these things is placed in terms of the environment and the system is important. Each of them may also have different access characteristics, and some of them are only relevant if you are doing that type of activity (e.g. PST migration, Vault Cache builds and so on).
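As promised above, here is a minimal free-space check, purely as an illustration: the volumes and the threshold are placeholders to adapt to wherever your vault store partitions, index locations, temporary areas and PST holding folder actually live, and in practice you would feed the result into whatever monitoring or alerting system you already use.

# Hypothetical example: warn when any volume used by Enterprise Vault
# (vault store partitions, index locations, temp areas, PST holding folder...)
# drops below a free-space threshold. Drive letters and threshold are placeholders.
import shutil

MONITORED_VOLUMES = ["C:\\", "E:\\", "F:\\"]   # adjust to your environment
MIN_FREE_PERCENT = 10

def check_volumes():
    for volume in MONITORED_VOLUMES:
        usage = shutil.disk_usage(volume)
        free_percent = usage.free / usage.total * 100
        status = "OK" if free_percent >= MIN_FREE_PERCENT else "LOW"
        print(f"{volume} {status}: {free_percent:.1f}% free "
              f"({usage.free / 1024 ** 3:.1f} GB of {usage.total / 1024 ** 3:.1f} GB)")

if __name__ == "__main__":
    check_volumes()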

CPU Usage

mon-02.jpg

It is very likely that out of hours, when your Enterprise Vault environment is busy processing mailbox archiving requests, it will be a busy bee. During the normal working day it might be completely the opposite - you may have almost zero CPU activity. But it's worth monitoring these sorts of things, and making sure that you are not pushing the maximum you can get hold of for too long a period, and definitely not at times when you do not expect it to be happening. Things to consider which will drive CPU load are:

  • Indexing
  • Archiving
  • Storage Expiry
  • Create collections (and migrations)
  • Vault Cache builds
  • PST Import
  • User retrievals of data
  • Queries from external systems like (for example) Discovery Accelerator

If you find that your server is constantly 'busy' - and by that I mean it has quite high CPU - then you may be approaching a bottleneck in terms of the extra that you can get out of the system at hand. It used to be expensive to upgrade hardware to get more CPUs, but many people use Enterprise Vault in virtual environments now, so getting extra CPUs might be a case of being 'nice' to your virtual machine administrator!

Memory Usage

Just like CPU usage, another thing to watch for is memory usage. Using the page file is bad. Unlike CPU usage, though, memory usage does not always return to 'normal' once activities have finished. Many processes on the Enterprise Vault server will keep hold of memory, and only release it back to the Operating System when the Operating System requires it. This means that 'just' having high memory usage may not indicate a problem. Page file usage, or at least significant page file usage, does indicate a problem though (a minimal check is sketched below, after the suggested figures). Keep in mind the minimum and recommended requirements for Enterprise Vault, and if at all possible ensure that you have adequate memory for the processes that are being used in the environment. The latest Enterprise Vault 10.0.4 guides suggest:

Minimum 8 GB
Recommended 16 GB
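As a rough illustration of the page file check mentioned above, the sketch below uses the third-party psutil package (an assumption, not something Enterprise Vault ships with), and the warning threshold is a placeholder rather than a Symantec recommendation.

# Hypothetical example: flag significant page file (swap) usage on an
# Enterprise Vault server. Requires the third-party psutil package;
# the threshold is a placeholder, not a Symantec recommendation.
import psutil

MAX_SWAP_PERCENT = 10   # treat sustained swap usage above this as a warning

def check_memory():
    virtual = psutil.virtual_memory()
    swap = psutil.swap_memory()
    print(f"Physical memory: {virtual.percent:.0f}% used "
          f"({virtual.total / 1024 ** 3:.1f} GB total)")
    print(f"Page file:       {swap.percent:.0f}% used")
    if swap.percent > MAX_SWAP_PERCENT:
        print("WARNING: significant page file usage - investigate memory pressure")

if __name__ == "__main__":
    check_memory()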

Vault Store Partitions

The Vault Store partitions are key to the operation of Enterprise Vault: they are added to pretty much every time any kind of archiving takes place, and the only respite that they might get in terms of space usage is when storage expiry runs. The Vault Store partitions don't just need to be monitored in terms of the space that they have left; they also need to be monitored to ensure that there are no significant disk queues during normal operations. Granted, during the archiving runs or the storage expiry runs *some* disk queues may form, but again you don't want them to be significant or to last for prolonged periods of time.

Another thing to consider when it comes to partitions is the overall size. You should be able to easily back up the partitions which are active/open during a reasonably sized backup window. If you can't, then you either have to invest in more technology to make your backups 'faster' (e.g. snapshots or similar), or you need to implement a strategy to close off partitions and create new ones regularly.

Also consider that the type of data access needed during the archiving window is 'different' from the normal day-to-day flow of traffic from users and other processes, with the archiving activities being more of a sustained push onto the disks. Take that sort of information into account when you review and monitor the performance of the disks.

SQL Databases

mon-03.jpg

Just like the Vault Store partitions, the Enterprise Vault Directory database, each Vault Store Group (aka Fingerprint) database, and each Vault Store database is likely to continue to grow day after day. The Enterprise Vault Directory database may not grow overly quickly and is usually quite small as databases go. The Vault Store database is likely to grow considerably over time, though, as it is where the information about each item in the system is stored, while the fingerprint information is stored in the corresponding Vault Store Group database. The performance of the SQL server is key to an Enterprise Vault system and is likely to be looked after by dedicated SQL administrators in many organisations. [In many organisations there might also be dedicated storage administrators.] Whilst SQL may be a bit of a black box in your organisation, it doesn't hurt to know some of the basics involved with SQL.

One of the crucial, often forgotten aspects of SQL is the maintenance plans. There is a good article which describes some very useful activities which need to be considered, so I won't repeat it all here. Take a look at:

http://www.symantec.com/docs/DOC5365

Transaction Logs

The majority of Enterprise Vault customers are archiving Exchange servers, and with that comes the fact that doing the archiving will generate Exchange transaction logs. These need to be considered at all times, particularly when archiving is first introduced or policies are changed. In fact it might be that policies are changed 'slowly' over time so as to lessen the impact on the Exchange servers.

Conclusion

In any Enterprise Vault environment there is a long list of inter-related activities and options that should be monitored. In most deployments it is crucial to ensure that you have a complete (or as complete as possible) list of things that you are going to monitor before delving in and picking products or tools to actually perform the monitoring. Are there any other aspects of Enterprise Vault that you consider monitoring? Let me know in the comments below.

Backup Exec 2012 Hyper V /VMware Application GRT demystified.


Before we look at how virtual application GRT works, let me first give a short description of how application GRT works in general.

In general, when an application server (Exchange, AD, SharePoint) is backed up with GRT enabled, there are two distinct phases to the backup:

1. Data is moved from the application server to the Backup Exec media server.

2. The data is cracked open and analyzed for content, and the results are recorded and saved in the catalogs (in layman's terms).

When the target media is disk storage, the data moves from the application server (the server being backed up) to an IMG folder on the media server, where it is cracked open and analyzed for content to build the catalogs.

 

When the target media is a tape device, the data moves from the application server to the tape, and the data residing in the snapshot on the application server is cracked open and analyzed for content.

 

For example, the backup of an Exchange server would result in an IMG folder containing files similar to those listed below. 

 

You can see the Exchange LOG files and the EDB file. Also note the PDI.TXT and PDI_STRM.BIN files; these two files contain information about the structure of the Exchange Info-Store, for example where the files are located on the Exchange server, the various file permissions, etc.

 

There is an IMG folder created for each application entity that is backed up. For example, there would be an IMG folder for each Exchange Info-Store, one for each SQL instance for AppGRT, one for Active Directory and one for each SharePoint item (i.e. DLE), provided that item supports GRT.

Application GRT backup with virtualization (using VMware or Hyper-V):

Turning to the virtual agent backup process: a virtual agent such as the VMware or Hyper-V agent must drive the application agents (SQL Agent, Exchange Agent, Active Directory Recovery Agent (ADRO), etc.) through their phases in order to build the GRT view.

In order to gather all of the metadata (the necessary application information and status from the virtual machine), the following conditions need to be fulfilled:

  • A remote agent of the same Backup Exec version must be installed and running inside the guest.
  • All of the application's services must be up and running.

The virtual backup must drive the application agent through its metadata collection phase before the VHD / VMDK files are transferred to the media server.

The backup process goes through the following steps before the snapshot of the virtual machine is taken:

  1. Establish a connection from the media server (bengine) to the Backup Exec Remote Agent (beremote) running inside the guest virtual machine selected for backup.
  2. Log on to the guest with the provided credentials.
  3. Request the Backup Exec Remote Agent running inside the guest virtual machine to:

     a. Take a snapshot of the necessary guest drives using the available VSS framework.

     b. Drive each application agent through a pseudo-backup, so that only the metadata required for the GRT process is collected.

     c. Delete the snapshot of the guest machine drives.

  4. Return status information to the Backup Exec media server.

The process discussed above takes place before the respective virtual agent (VMware or Hyper-V) takes the snapshot of the virtual machine.

The UI displays "Preprocessing" during this stage of the virtual backup.

During this phase, a small amount of data is generated and stored inside the guest virtual machine.

When the VHD / VMDK files are transferred to the media server, this data is transferred as well.

This has the additional advantage that the data is always kept in sync with the application's data stored inside the VHD / VMDK files.

This data is stored in the LOGS directory inside the guest and is deleted from the guest machine after a successful snapshot.

Phase 2:

The credentials used to log on to the guest are the ones set for the VM resource under the virtual host.

It is not possible to set application-specific credentials.

In other words, if you have a SQL instance that uses SQL authentication, or if the application implements access permissions such that the user is unable to access the application, you will be unable to perform Virt-App GRT for that application.

If all four of the application GRT options are disabled (i.e. unchecked), the entire metadata collection process is skipped.

Disabling an application GRT option means that application will be silently skipped during the metadata collection process inside the guest.

Note:

 The entire metadata process is silently skipped if the VM is powered off.

Once the VHD / VMDK files have been transferred to the media server, GRT processing begins. GRT processing in this context means file and folder GRT processing as well as Virt-App GRT processing.

In order to perform any sort of GRT processing, the VHD / VMDK files must be mounted so that they can be scanned for content and have that content added to the catalogs.

For VMware, the VMDKs are always mounted on the ESX server (datastore).

For Hyper-V, the VHDs that are mounted for GRT operations depend on the target media (tape or disk).

If the target is disk, the VHDs that are mounted are the ones that reside on the media server.

If the target is tape, the VHDs that are mounted are the ones that reside in the snapshot on the Hyper-V host.

Once the VHD / VMDK files are mounted, the following things are done:

  • Registry information on the mounted guest is queried for drive identification (e.g. assigned drive letters, boot records and so on).
  • After the VHD / VMDK files are mounted and the drive letter mapping is determined, the "guest" volumes appear as "local" volumes. So for file and folder GRT, the NTFS doesn't actually back up any data; the only thing that happens for the "guest" volumes is that every file in the VHD / VMDK gets "cataloged" under the drive letter used by the guest.
  • Once file and folder GRT is completed, Virt-App GRT processing is started.
  • Processing starts with application discovery.
  • The PDI.TXT and PDI_STRM.BIN files inside the VHD / VMDK (created during phase 1, as discussed above) are opened and analyzed, and the locations of the files inside the VHD / VMDK files are recorded to be used while creating the soft links.
  • In this case the links are Windows symbolic links, but depending on the operating system and virtualization type (i.e. Hyper-V vs. VMware) the links might be VLINKs generated by VFF.
  • The application agent catalogs everything. In other words, regardless of what the application agent supports for restore browse, all of the data in the restore browse view is coming out of the catalogs.

Once this phase is completed, the normal VERIFY job is executed and the status is updated in the database.

  • With regard to restores, the VHD / VMDK files are mounted, but not all of them may need to be mounted. If the VM has two VHD / VMDK files and the item to be restored is in the first one, then only the first VHD / VMDK is mounted.

Kind Regards,

S

 


ITMS 7.5 Documentation on Cloud – Try it!


Continuing our commitment and effort to provide easily accessible and up-to-date documentation, we are introducing the ITMS 7.5 documentation through the Symantec Help Center on the cloud.

All available ITMS 7.5 suite-level and solution guides are accessible from this cloud-based Symantec Help Center. Use the Search tab to quickly and easily find a topic. You can also apply filters to narrow the search results to the suite-level or solution guides. Use the Browse tab to navigate through the Table of Contents of any guide and have the same experience as browsing through a PDF.

The ITMS 7.5 documentation on cloud is available at the following URL:

http://help.symantec.com/CS?locale=EN_US&vid=v90719369_v93032876&ProdId=SYMHELPHOME&context=itms7.5


opscenter Data Collection Issue--Solution


OpsCenter version 7.5.0.3

(Summary of the fix for the 'no data in reports' error)

 

 

 

 

Phase 1

 

 

→ Froze the cluster and stopped OpsCenter.

→ Backed up the database:

   D:\program files\symantec\opscenter\bin> dbbackup

→ Backed up the opscenterserversrv.xml file.

→ Changed "-Xmx" from 4096M to 6144M (from 4 GB to 6 GB) in the opscenterserversrv.xml file.

→ Simplified the database server startup and corrected a typo (the extra -m).

   File: server\db\conf\server.conf

 

 

From:

-n Opscenter_usclusrpt01 -x tcpip(LocalOnly=YES;ServerPort=13786) -gd DBA -gk DBA -gl DBA -gp 4096 -ti 0 -c 8G -ch 20G -cl 4G -zl -os 1M -m -o "D:\Program Files\symantec\Opscenter\server\db\log\server.log" -m

To:

-n Opscenter_usclusrpt01 -x tcpip(LocalOnly=YES;ServerPort=13786) -gd DBA -gk DBA -gl DBA -gp 4096 -ti 0 -c 20G -cs -os 1M -m -o "D:\Program Files\symantec\Opscenter\server\db\log\server.log"

 

This increased the database cache memory from 8 GB to 20 GB (total physical memory is 32 GB, with 2 GB allocated to the GUI).

Phase 2

 

→ Removed the VMware access host name usvhildcms02 from the master server.

→ Increased the client timeout from 10 minutes to 1 hour by adding the line below to scl.conf (Opscenter\Server\config\scl.conf on Windows; /opt/SYMCOpsCenter/config on UNIX):

   Nbu.scl.requestTimeoutInMillis=3600000

→ Restarted OpsCenter:

   D:\program files\symantec\opscenter\bin> opsadmin stop
   D:\program files\symantec\opscenter\bin> opsadmin start

→ In the OpsCenter web GUI:

   Settings → Configuration → usvhvoilms001 → Data Collection Disable, then Data Collection Enable

→ Currently working on replicating the same settings in the configuration on the inactive node "usnjnbrpt01". After that I will unfreeze the cluster.

 

 

Phase 3

 

Appliance Hardware Data collection Error ---fixed

 

After reviewing logs from the master server and OpsCenter, data collection for the appliance hardware was failing because an invalid appliance hostname "nb-appliance" had been added to the appliance media server list.

-------------------------------------------------------------------

$ nbemmcmd -listhosts         

NBEMMCMD, Version:7.1

The following hosts were found:

 

media              nb-appliance

---------------------------------------------------------------------

 

$ vmoprcmd -devmon hs   | grep nb-appliance                                                                                    

 

                           HOST STATUS

Host Name                                  Version   Host Status

=========================================  =======   ===========

nb-appliance                               710700    DEACTIVATED

----------------------------------------------------------------------

 

 

We tried deactivating it, but OpsCenter was still trying to get hardware information from the master server for the appliance media server "nb-appliance".

 

After confirming with Mila/Cleveland, I deleted the media server hostname "nb-appliance" with the command:

nbemmcmd -deletehost -machinename nb-appliance -machinetype media

 

 

After deleting it, data collection for the appliance hardware succeeds.

 

 


Adding Patch Trending to Your Symantec Management Platform Step by Step Guide


Table of contents:

Introduction:

If you look around Connect for Patch Trending you will find a number of downloads, articles and even blog posts. These are the result of a customer-driven process that allowed the tool set to grow organically into something sizable.

This document aims to be the only place you need to go through to get up and running with the tool.

Top

Unpacking:

The installation pack is available from the Site Builder download page, but here is a quick link:

https://www-secure.symantec.com/connect/sites/default/files/Patch Trending Package_0.zip.

Unpack the package into a location of your choice:

1_unpack.png

Top

Installing:

Note! If your SMP is _not_ installed using the default drive and path you'll need to customise the installation directory - see below for the details.

Open an elevated command prompt and go to your package directory to run "install.bat".

The installation process will:

  • Copy SiteBuilder-v14.exe to the destination folder
  • Copy SiteBuilder-v14.exe to SiteBuilder.exe in the destination folder
  • Copy site-layout.txt to the destination folder
  • Copy web.config to the destination folder
  • Import 5 items into the SMP database

2_install.png

Top

Console items:

The SMP console will now have the following items at the root of the "Job and Task" folder:

  • Run SiteBuilder (Patch Trending)
  • RunOnce SiteBuilder (Install SQL code)
  • TRENDING Compliance by computer
  • TRENDING Compliance by update
  • TRENDING Inactive computer

3_SMP-console.png

Top

Run once:

The SiteBuilder executable contains all the required stored procedures to trend compliance by update, compliance by computer and inactive computers. To add the procedures to the database (or reset them), SiteBuilder must be invoked with the command line option "/install".

This is done by running the task "RunOnce SiteBuilder (Install SQL code)".

4_RunOnce.png

Top

Scheduling:

Next you need to schedule the 4 remaining tasks to run daily. The trending tasks (that run the SQL) are best run at the end of the day (so you collect and display data for the day on which the collection is done), and the SiteBuilder task must run once the trending tasks have completed.

5_DailySchedule.png

Here is a sample scheduling table:

Task Name                            Schedule
TRENDING Compliance by computer      Daily 23:45
TRENDING Compliance by update        Daily 23:49
TRENDING Inactive computer           Daily 23:53
Run SiteBuilder (Patch Trending)     Daily 23:57

Top

Custom destination:

If your Notification Server directory is not under the default drive and path, you need to take a few additional steps beyond the process above to install the toolkit.

On the command line and before running install.bat you must set the installation directory in this manner:

set installdir="<desired destination folder>"

For example:

set installdir="C:\Program Files\Altiris\Patch Trending"

or

set installdir="D:\Altiris\Notification Server\Web\Patch Trending"

2_install_custom.png

Once the items are imported in the SMP console, you need to modify the 2 tasks that run site builder with your custom path:

6_CustomSiteBuilder.png

Top

Conclusion:

With the data collection and SiteBuilder tasks scheduled to run, you should be able to see some results after a couple of nightly executions (the first night builds up the site with empty graphs, and the second night brings in the data required to draw the lines).

Top

References:

[1] {CWoC} Patch Trending SiteBuilder
[2] {CWoC} Patch Trending: Adding Patch Compliance Trending Capacity ...
[3] {CWoC} Patch Trending Stored Procedures
[4] {CWoC} Patch Trending: Adding a Compliance by Computer module
[5] {CWoC} Patch Trending: Inactive Computer Trending Report
