The TFS Proxy needs to be set up to run under an account that is a member of the Project Collection Proxy Service Accounts group on the TFS server. This is, for obvious reasons, not possible when the proxy and the server are in two different domains without a trust relationship.
To work around this, we need to fool the authentication mechanism into treating the account used by the proxy as one known to the TFS server. The trick consists of using local accounts. Consider the situation depicted in the image below. We have:
- two domains, DomainA and DomainB, which do not have a trust relationship
- a TFS server located in DomainB
- a TFS Proxy server located in DomainA which needs to cache data from the TFS server in DomainB
- several domain users in DomainB which will be used by developers located in DomainA.
To fool the authentication we need two local accounts (machine accounts, not domain accounts) that share the same name, for example TfsProxy, and the same password. One account is created on the TFS server: if the server is named “TfsServer”, we get the user TfsServer\TfsProxy. This user needs to be a member of the Project Collection Proxy Service Accounts group of the project collection that will be cached. The second account is created on the machine that hosts the proxy, so if the proxy machine is named Proxy1, we get the user Proxy1\TfsProxy.
The TFS Proxy now needs to be configured to run under the user Proxy1\TfsProxy and, surprise, it works. Next, instruct Visual Studio to use the proxy server (Tools -> Options -> Source Control -> Visual Studio Team Foundation Server) and connect to the TFS.
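Why the trick works can be sketched with a toy model: in a workgroup-style logon, the target machine validates the incoming user name and password only against its own local account database, so two machines that each hold a local account with the same name and password will accept each other's credentials. The sketch below is a deliberate simplification (real Windows authentication uses NTLM challenge/response, not plain hash comparison), and the password is made up:

```python
# Toy model (NOT real NTLM) of why mirrored local accounts work across
# untrusted domains: the target machine checks the user name and password
# only against its OWN local account database.
import hashlib

def pwd_hash(password: str) -> str:
    """Stand-in for the real password hash."""
    return hashlib.sha256(password.encode()).hexdigest()

# Local account databases of the two machines (names from the post,
# password invented for the example).
local_accounts = {
    "TfsServer": {"TfsProxy": pwd_hash("S3cret!")},
    "Proxy1":    {"TfsProxy": pwd_hash("S3cret!")},
}

def authenticate(target_machine: str, user: str, password: str) -> bool:
    """The target machine consults only its own account list."""
    accounts = local_accounts.get(target_machine, {})
    return accounts.get(user) == pwd_hash(password)

# The proxy service runs as Proxy1\TfsProxy but is accepted by TfsServer,
# because TfsServer holds a matching local account.
print(authenticate("TfsServer", "TfsProxy", "S3cret!"))  # True
print(authenticate("TfsServer", "TfsProxy", "wrong"))    # False
```

The point of the model: nothing in the check refers to a domain, which is exactly why the missing trust relationship does not matter.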
Anyone running big projects on TFS will eventually run into this issue: the databases keep growing as time goes on. There are many causes; below are the most common ones, together with solutions to keep them in check.
1. Test attachments. Whenever you run a test, be it a unit test or any other kind controlled by TFS, several files are stored in the database to allow easy reproduction of the issues found. This is a nice feature. What is not nice is that you cannot delete them through any common feature provided by Microsoft. What you can do is:
- install the Team Foundation Administration Toolkit (you can get it from the Visual Studio online extensions repository) and run the Test Attachment Sizes plugin to find out how much space they eat
- run the TFS Attachment Cleaner with a configuration file tailored to delete the files that eat too much space (this might take a while, so be patient)
2. Storing DLLs in the repository. Sometimes this is a must, but developers overdo it and store all kinds of junk in source control. Using the Team Foundation Administration Toolkit you can “Search large files” and try to figure out what is happening. When you have to delete something, use TFSSCExplorerExtension (also found in the online extensions repository) to permanently delete those files (destroy). Just marking them as deleted does not actually release space from the database.
3. Projects are adding up. This is a fact of life. What you can do is decide what you want to archive, and do so. Archiving a project is not a trivial task, especially when you want to keep the history. One approach is to split the project collection in two and then delete, from each half, the projects that do not belong there, so that you end up with one project collection containing archived projects and one containing active projects. The collection with archived projects can then be detached from TFS and dumped to a tape drive or whatever backup means you have. Later you can restore it by attaching it back to TFS.
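As an aside, once you have a dump of attachment sizes, ranking the biggest offenders takes only a few lines. The CSV layout below is purely hypothetical (it is not the actual export format of the Test Attachment Sizes plugin), but the idea carries over to whatever report you manage to extract:

```python
# Rank test attachments by total size per file extension, so you know
# what to target with the TFS Attachment Cleaner. The CSV layout
# (extension,size_in_bytes) is an assumption, not the plugin's format.
import csv
import io
from collections import defaultdict

sample = """extension,size_in_bytes
.trx,1048576
.wmv,734003200
.wmv,524288000
.dll,10485760
"""

totals = defaultdict(int)
for row in csv.DictReader(io.StringIO(sample)):
    totals[row["extension"]] += int(row["size_in_bytes"])

# Biggest space eaters first.
for ext, size in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{ext}: {size / 1024**2:.0f} MB")
```

In practice video recordings (.wmv) and IntelliTrace logs tend to dominate, which is why the cleaner's filters are worth configuring carefully.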
That’s it for now. I can detail these steps if someone needs them.
Here’s something that might help you: the TeamReview add-in recompiled to work with Visual Studio 2012. I take no credit whatsoever other than the small changes needed to make it work. The original code can be found at: http://teamreview.codeplex.com/.
You can download it right now from here: TeamReview.VS11. In the meantime I will try to push the code to CodePlex, its rightful place.
I have been quite busy recently, and one of the things that kept me awake was a TFS restructure and upgrade. We did a migration from TFS 2010 to TFS 2012 and, as you can imagine, it was not a walk in the park, but that is not what I want to share right now.
What we encountered was a problem with warehouse processing, namely the Work Item Synchronization Job. For one of our project collections (30k+ work items, a lot of history and a really complex process template) it would simply never finish. Putting a SQL trace on it, we saw that a call to GetWarehouseData always ended after one hour. Making the same call from SQL Server Management Studio actually took longer, around 80 minutes. We assumed that the TFS job was simply timing out, because the subsequent calls were identical in parameters. We contacted Microsoft and, after weeks of investigation, we managed to get hold of the product team, and one of the guys said: “try these indexes; some of our clients with big databases found that they had a great impact”.
No, it was not a great impact, it was a tremendous impact. That 80-minute procedure now runs in 2 seconds! Yikes.
Here are the indexes. Use them at your own risk. Microsoft told us they will be part of a future TFS patch, but not the next one.
CREATE NONCLUSTERED INDEX [IX_WorkItemsWere_ID_Rev]
ON [dbo].[WorkItemsWere] ([PartitionId], [ID], [Rev])
INCLUDE ([Revised Date], [Changed Date], [AreaID], [State], [Authorized Date]);

CREATE NONCLUSTERED INDEX [IX_WorkItemsLatest_ID_Rev]
ON [dbo].[WorkItemsLatest] ([PartitionId], [ID], [Rev])
INCLUDE ([Revised Date], [Changed Date], [AreaID], [State], [Authorized Date]);
When you try to copy the URL of a document in SharePoint you end up with a huge one, containing all sorts of information that is, most of the time, useless to you. You just want to send it in an email; you need a 20-30 character URL, not some cryptic one that often fails to even open when you click on it.
I found a nice blog post about it and, while it’s tedious to make it work (one has to make changes to each document library), it just works.
Here it is: http://blogs.technet.com/b/seanearp/archive/2010/07/09/long-url-s-in-sharepoint-2010.aspx
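For a one-off, you can also trim such a URL with a few lines of script. The sketch below assumes a typical SharePoint viewer URL that carries the document path in an "id" query parameter; that shape is an assumption about common SharePoint 2010 links, not something the blog post above prescribes, and your URLs may differ:

```python
# Trim a long SharePoint viewer URL down to a direct link to the document.
# Assumption: the document path hides in the "id" query parameter of a
# _layouts viewer page; otherwise we just drop the query string.
from urllib.parse import urlsplit, parse_qs, unquote

def direct_link(long_url: str) -> str:
    parts = urlsplit(long_url)
    params = parse_qs(parts.query)
    if "id" in params:  # viewer page: the real path is in ?id=...
        return f"{parts.scheme}://{parts.netloc}{unquote(params['id'][0])}"
    return f"{parts.scheme}://{parts.netloc}{parts.path}"

long_url = ("http://sp/_layouts/WordViewer.aspx"
            "?id=/Shared%20Documents/spec.docx&Source=http%3A%2F%2Fsp")
print(direct_link(long_url))  # http://sp/Shared Documents/spec.docx
```

The blog post's approach is still better for a team, since the shortened links are produced by SharePoint itself and guaranteed to resolve.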
We recently moved from TFS 2010 to TFS 2012. One of the issues I encountered (it could have been my fault) was that the virtual lab was not migrated.
After installing a new test controller, one has to move all machines onto it. Unfortunately this is no longer possible if the old test controller is no longer available. The steps needed are:
1. Stop the virtual machine that you want to migrate
2. Open the properties of the virtual machine (using Virtual Machine Manager) and remove anything that is written in the description field
3. Start the virtual machine and uninstall the Virtual Lab Agent and the Virtual Test Agent
4. Install the new 2012 test agent (this acts now both as a lab and test agent)
5. Open the registry and delete the content of the key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Virtual Machine\External\Microsoft.TeamFoundation.Lab.Isolation.ServiceContractVersion
6. Fire up the “Configure Test Agent Tool” program and configure it with the new test controller
When running tests using Microsoft Test Manager, either manually triggered or as part of continuous integration, sometimes one needs to add some custom log files to the test results of a test run.
To do that, add the following to the cleanup procedure:
File.Copy(fullPathToLogFile, Path.Combine(TestContext.TestResultsDirectory, logFileName));
To get the TestContext you need to follow the steps here
fullPathToLogFile and logFileName are self-explanatory.
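The C# line is nothing more than a file copy into the run's results folder; the test infrastructure then picks up whatever lands there. For reference, the same idea expressed in Python, with made-up paths standing in for TestContext.TestResultsDirectory, looks like this:

```python
# Copy a log file into a test run's results directory so the harness
# collects it with the run. Paths here are invented for the demo.
import os
import shutil
import tempfile

def attach_log(full_path_to_log: str, results_dir: str) -> str:
    """Copy the log into results_dir and return the path of the copy."""
    os.makedirs(results_dir, exist_ok=True)
    dest = os.path.join(results_dir, os.path.basename(full_path_to_log))
    shutil.copy(full_path_to_log, dest)
    return dest

# Demo with temporary directories.
with tempfile.TemporaryDirectory() as tmp:
    log = os.path.join(tmp, "run.log")
    with open(log, "w") as f:
        f.write("hello")
    copied = attach_log(log, os.path.join(tmp, "TestResults"))
    print(os.path.basename(copied))  # run.log
```

The key detail in both languages is the destination: anything copied into the results directory during cleanup ends up attached to the test result.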
This week we ran into a nasty problem: we ran out of space on the database server holding the TFS databases. One of our solutions was to rebuild the report databases using the TFS Administration Console. Unfortunately, this caused more issues than it solved. What happened?
First problem: during the rebuild phase, TFS recreates the analysis database from scratch and then rebuilds it. Unfortunately, during the recreation one of the XML files of the OLAP cube was generated completely empty, which caused processing to stop. To figure this out you have to look into the event log on the TFS server. The fix was a little radical: the first attempt, just deleting the offending file, did not succeed, so we created a new analysis database (like here).
We waited for the TFSJobAgent to do its job and finish synchronizing the databases. Unfortunately, that did not go right either. After waiting and waiting, we got lots of errors saying:
"[Common Structures Warehouse Sync]: ---> TF221033: Job failed to acquire a lock using lock mode Shared, resource DataSync: [...].[TFS_Warehouse] and timeout 30".
This sounded kind of logical, since we were dealing with around 150 GB of TFS-related databases across 5 project collections, and the jobs were fighting over the exclusive locks. I looked into this and tried to find a solution but … no. Microsoft was not playing nice: there are no tools available to investigate and control these kinds of issues. I knew TFS offered some services we could use to get a grasp of the situation. The service that interested us is located at a location similar to this:
But good luck working with that. Besides needing to be logged on to the TFS machine, one needs to know the name of the service to interrogate and interact with. The solution? Yep: WCF and WPF.
I made a simple application which consumes the above said service.
OK, that’s just a tool. What is the actual solution? Stop all jobs and then start them one by one, so that they are not fighting over exclusive locks. Using the little program above, we just clicked stop on each job and then enabled them one by one.
The end result is a fully formed and up-to-date TFS_Warehouse database (and, of course, all the reports running on it).
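The effect of serializing the jobs can be illustrated with a toy model. The job names and the in-process lock below are made up; real TFS jobs are controlled through its job service, not through code like this:

```python
# Toy illustration, not real TFS code: TF221033 means a sync job could
# not acquire a shared lock within its timeout. Serializing the jobs
# removes the contention entirely.
import threading

lock = threading.Lock()   # stands in for the DataSync lock on TFS_Warehouse
results = []

def sync_job(name: str) -> None:
    # Like the real job, give up if the lock cannot be acquired in time.
    if lock.acquire(timeout=0.05):
        try:
            results.append(name)          # the "processing" happens here
        finally:
            lock.release()
    else:
        results.append(f"{name}: lock timeout")

lock.acquire()                            # a competing job hogs the lock...
sync_job("CollectionA Warehouse Sync")    # ...so this one fails, like TF221033
lock.release()                            # stop the competing job first,
sync_job("CollectionA Warehouse Sync")    # then run the jobs one at a time
print(results)
```

Run serially with no competitor holding the lock, every job acquires it immediately, which mirrors what we saw once the jobs were re-enabled one by one.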
P.S. I will come back with the source/binary … it’s just too dirty to be made public.
[Update] Here is the link to the source code. Just make sure you update the service reference to point to the correct TFS instance.
I recently reset my password on GMail and was not able to reconnect the mail app on my WP7 to GMail. It turned out that, when asked for the new password, I had entered it wrong more than 3 times, and the captcha had activated. Unfortunately, the WP7 mail application does not handle this, and even if you enter the correct credentials you are told that the username/password is incorrect.
To fix this, login to Gmail and then access the following link:
https://accounts.google.com/DisplayUnlockCaptcha. This removes the captcha, so your email application only has to send the username/password combination, and the phone’s email will sync again.
For the last month I tried in vain to update my WP7 HTC Mozart from version 7740. Every time, I got the message that my phone had the latest version, although I knew that was not true. I tried every trick I could find, short of flashing a custom ROM.
In fact, the solution was very simple, even if radical: reset the phone to factory defaults, plug it into Zune, and check for updates.